Monday, April 26, 2021

Running IPython on multiple GPUs

Here is a simple configuration to run IPython on different GPUs on a single PC.

1. Check the available GPUs in the PC via the terminal: "$ sudo lshw -C display"

Checking the number of GPUs

2. Launch "$ CUDA_VISIBLE_DEVICES=0 ipython3" to run IPython on the first GPU

Set IPython to use the first GPU (at the OS level)

3. Launch "$ CUDA_VISIBLE_DEVICES=1 ipython3" to run IPython on the second GPU

Set IPython to use the second GPU

4. Run "$ nvidia-smi" to confirm both GPUs are running simultaneously

Output of nvidia-smi when two GPUs are used simultaneously

5. For comparison, here is the output when two GPUs are used in IPython without the commands above

IPython without the configuration, showing both GPUs in its output

Output of nvidia-smi without the configuration

Both the IPython outputs and nvidia-smi confirm that the configuration works. Each IPython session reports a single GPU, while without the configuration (without setting "CUDA_VISIBLE_DEVICES=X") IPython reports both GPUs. The nvidia-smi output also shows both GPUs working simultaneously under the given configuration, whereas without it only one GPU is actually busy (the second GPU consumes only 416 MiB of memory, indicating it is idle).
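Under the hood, CUDA_VISIBLE_DEVICES is an ordinary per-process environment variable that the CUDA runtime reads at startup. A minimal sketch of the mechanism (using plain python3 in place of a GPU job, since the effect is just per-process environment inheritance):

```shell
# Each launched process sees only the value it was started with;
# two terminals can therefore pin two IPython sessions to two GPUs.
CUDA_VISIBLE_DEVICES=0 python3 -c 'import os; print(os.environ["CUDA_VISIBLE_DEVICES"])'
CUDA_VISIBLE_DEVICES=1 python3 -c 'import os; print(os.environ["CUDA_VISIBLE_DEVICES"])'
```

In the real workflow, replace python3 with ipython3 in each terminal, exactly as in steps 2 and 3.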


Update 2021/05/13

I checked again today; the above-mentioned steps sometimes still fail to select a GPU on a multi-GPU machine. The following workaround works for me.

Choose GPU 0

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # must be set before importing TensorFlow

# check with
import tensorflow as tf
print(tf.config.list_physical_devices("GPU"))  # should list a single GPU
The same steps apply to GPU 1 (set the value to "1"). These steps can also be applied using the shell command "export".
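For the "export" variant, the variable must be exported before IPython (and hence TensorFlow) starts; a minimal sketch:

```shell
# Export once; every process started afterwards from this shell
# (e.g. ipython3) inherits the variable.
export CUDA_VISIBLE_DEVICES=0
python3 -c 'import os; print(os.environ["CUDA_VISIBLE_DEVICES"])'
```

Unlike the per-command prefix in steps 2 and 3, export affects the whole shell session, so remember to change or unset it before launching a session for the other GPU.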
Please be sure not to make a typo (e.g., VISIBLE -> VISBLE); otherwise, the configuration won't work. There is no error message when we make a typo in the variable name in the shell.
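The silent-failure mode is easy to reproduce in Python as well: a misspelled name simply creates a different, unused environment variable, and nothing ever complains:

```python
import os

# A typo creates an unrelated variable that the CUDA runtime never reads...
os.environ["CUDA_VISBLE_DEVICES"] = "0"   # note the missing "I"

# ...while the variable that actually matters stays untouched, with no error.
print(os.environ.get("CUDA_VISIBLE_DEVICES"))  # None, unless it was set elsewhere
```

This is why double-checking the spelling (or echoing the variable back) is worth the few seconds it takes.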