I am trying to install DeepLabCut. I have a GPU, CUDA, and cuDNN ready to go, but I can’t find information about TensorFlow’s CUDA compatibility after TF version 2.9.0.
Welcome to the TensorFlow Forum!
Please refer to the tested build configurations below.
Note: All existing TensorFlow nightlies and TensorFlow 2.12 (yet to be released) are compatible with CUDA 11.8.
Thank you!
Could you please update the page 从源代码构建 | TensorFlow (“Build from source”)?
It seems the page hasn’t been updated for some time.
Thanks for reporting.
Hi @chunduriv, I have a machine with CUDA 11.6. Is TensorFlow 2.12.0 compatible with this version of CUDA? If not, is it on the future roadmap? Thanks
I don’t think it is on the roadmap. According to the tested build configurations, the latest TensorFlow 2.12 is compatible with CUDA 11.8.
Thank you!
Ok, thanks @chunduriv. Do I need to install Visual Studio for TensorFlow 2.12.0 to identify the CUDA 11.8 GPU on my machine? If yes, which components of Visual Studio do I need to install? Thanks for your assistance in advance.
I have upgraded CUDA to 11.8; however, when I run the code below, the result is ‘0’.

import tensorflow as tf
print("Num GPUs Available:", len(tf.config.list_physical_devices('GPU')))
Am I missing something?
Thanks
As per the TF official website, if you are using native Windows, “TensorFlow 2.10 was the last TensorFlow release that supported GPU on native-Windows”.
Therefore, if you want to use CUDA on your machine, downgrade TF to 2.10 (or 2.9), CUDA to 11.2, and cuDNN to 8.1 (for CUDA 11.2).
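After downgrading, a quick sanity check (a minimal sketch; tf.sysconfig.get_build_info() is available in recent TF 2.x releases) confirms that the installed wheel has GPU support and was built against the CUDA/cuDNN versions you set up:

import tensorflow as tf

# False here means the installed wheel has no GPU support at all
# (e.g. TF >= 2.11 on native Windows), so CUDA setup alone cannot help.
print("Built with CUDA:", tf.test.is_built_with_cuda())

# Reports which CUDA/cuDNN the wheel was compiled against; for TF 2.10
# these should correspond to CUDA 11.2 and cuDNN 8.1.
info = tf.sysconfig.get_build_info()
print("TF:", tf.__version__)
print("CUDA:", info.get("cuda_version"), "cuDNN:", info.get("cudnn_version"))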
Hi Veda, thanks for your response. I have tried the approach you suggested; however, TensorFlow still doesn’t recognise the GPU on my laptop. Any suggestions would be greatly appreciated. Thanks
Hi Nikit,
The config I suggested in my post should work even with older GPUs (my laptop has a GeForce 940MX, which is not even listed among NVIDIA’s CUDA-supported devices). However, getting this working is not easy; there is a sequence you should follow. Here is what I tried:
1. Uninstall every NVIDIA driver, including CUDA. (This will cause issues, but they are temporary.)
2. Restart the computer and install the latest compatible NVIDIA drivers from the official website.
2A. Install Visual Studio 2017 Community (mandatory; do not install later versions, as they are incompatible with TF 2.10 and CUDA 11.2) with the “Desktop Development with C++” workload. You can install other workloads, but they are not mandatory. This version is hard to find, but without it CUDA does not work, so good luck.
2B. Install Python 3.10, as TF 2.10 is not supported on later versions of Python.
3. Download CUDA 11.2 for Windows and install it (cuda_11.2.0_460.89_win10).
4. Download cuDNN 8.1 for CUDA 11.2 (cudnn-11.2-windows-x64-v8.1.1.33).
5. Extract cuDNN and copy all files from its “lib”, “include”, and “bin” folders into the folders with the same names in the CUDA directory (C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2).
6. Go to environment variables and add the following CUDA folder paths to ‘Path’ (in both the user and system sections):
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\lib\x64
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\include
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\bin
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\libnvvp
Also ensure the “CUDA_PATH” and “CUDA_PATH_V11_2” variables were automatically added to System Variables (a short sketch for verifying these paths and DLLs resolve follows after this list).
7. Install TensorFlow 2.10 in the ‘venv’ of your project, e.g., open your project in PyCharm/VS Code, go to the terminal, and type pip3 install tensorflow==2.10. This is a very important step. It gets more complicated if you are using Jupyter or no IDE; search for how to create and work with a venv in that case.
8. If step 7 did not work, install TensorFlow 2.10 globally (i.e., from a cmd terminal) and then create a fresh new project, making sure you tick the ‘Inherit global packages’ checkbox. If you miss it, open the venv terminal of the new project and follow step 7 above.
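If TF still reports no GPU after these steps, it is often a path or DLL lookup problem. Here is a minimal sketch for checking that the CUDA 11.2 runtime and cuDNN 8.1 DLLs actually load (the DLL names below are the ones those versions ship with; adjust if yours differ):

import ctypes
import os

# CUDA_PATH should point at ...\CUDA\v11.2 after step 6.
cuda_path = os.environ.get("CUDA_PATH")
print("CUDA_PATH =", cuda_path)

if cuda_path:
    # cudart64_110.dll ships with CUDA 11.x, cudnn64_8.dll with cuDNN 8.x;
    # after step 5 both should sit in CUDA's bin folder. WinDLL raises
    # OSError if the DLL is missing or cannot be loaded.
    for dll in ("cudart64_110.dll", "cudnn64_8.dll"):
        try:
            ctypes.WinDLL(os.path.join(cuda_path, "bin", dll))
            print(dll, "loaded OK")
        except OSError:
            print(dll, "failed to load - re-check steps 5 and 6")
else:
    print("CUDA_PATH is not set - re-check step 6")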
Now you can run print(tf.config.list_physical_devices('GPU')) and hopefully you will see device:0 (i.e., TF is using the GPU).
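To confirm that ops actually run on the GPU, rather than the device merely being listed, you can turn on device-placement logging; a minimal sketch:

import tensorflow as tf

# Logs the device every op is placed on; look for ".../device:GPU:0".
tf.debugging.set_log_device_placement(True)

a = tf.random.uniform((1000, 1000))
b = tf.random.uniform((1000, 1000))
print(tf.matmul(a, b).shape)  # should land on GPU:0 if the setup worked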
Getting the GPU working with TF is only one small step: if you do not batch your input pipeline properly, your computer will run slower with ML training on the GPU than on the CPU. For more info, check this link: Low NVIDIA GPU Usage with Keras and Tensorflow - Stack Overflow
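To illustrate the batching point, here is a minimal tf.data sketch (the tensors and sizes are placeholders, not from this thread): batching and prefetching keep the GPU fed so it does not sit idle between steps.

import tensorflow as tf

# Toy in-memory data standing in for a real training set.
x = tf.random.uniform((10_000, 32))
y = tf.random.uniform((10_000,), maxval=2, dtype=tf.int32)

ds = (
    tf.data.Dataset.from_tensor_slices((x, y))
    .shuffle(10_000)
    .batch(256)                  # reasonably large batches keep the GPU busy
    .prefetch(tf.data.AUTOTUNE)  # overlap input preparation with GPU compute
)

for xb, yb in ds.take(1):
    print(xb.shape, yb.shape)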