I’ve installed the CUDA toolkit and cuDNN on my Ubuntu machine to run my models on the GPU. My LD_LIBRARY_PATH is set to the absolute paths “/usr/local/cuda/include:/usr/local/cuda/lib64”. If I run the following script as a .py file, TensorFlow can see my GPU:
import tensorflow as tf

# List the GPUs visible to TensorFlow
print(tf.config.list_physical_devices('GPU'))
The result is [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')].
However, if I run the same code in a Jupyter notebook, the returned list is empty. I am unsure why the notebook cannot see my GPU. I have tried setting LD_LIBRARY_PATH inside the notebook to the same value as my shell’s $LD_LIBRARY_PATH, but no luck.
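For reference, this is roughly what I tried in a notebook cell; I’m not certain setting the variable this way has any effect, since the kernel process has already started by the time the cell runs:

import os

# Check what LD_LIBRARY_PATH the Jupyter kernel actually inherited
print(os.environ.get('LD_LIBRARY_PATH'))

# Set it to the same value as my shell's LD_LIBRARY_PATH and try again
os.environ['LD_LIBRARY_PATH'] = '/usr/local/cuda/include:/usr/local/cuda/lib64'

import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))

The first print shows None in the notebook, and the GPU list is still empty afterwards.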