2.10 last version to support native Windows GPU

As stated in the installation guide,

The current TensorFlow version, 2.10, is the last TensorFlow release that will support GPU on native-Windows.

Just wondering what the thinking behind this step is? And I presume it isn’t just that pre-built binaries won’t be made, but that it simply won’t work? Whereas PyTorch supports Windows, and JAX you can build yourself from source.

TensorFlow with GPU access is supported for WSL2 on Windows 10 19044 or higher. This corresponds to Windows 10 version 21H2, the November 2021 update. For instructions, please refer to Install WSL2 and NVIDIA’s setup docs for CUDA in WSL.
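As a quick sanity check against that requirement, the comparison can be sketched in a few lines (a minimal sketch; the 19044 threshold comes from the quoted guide, and `sys.getwindowsversion()` shown in the comment is only available on Windows):

```python
def meets_wsl2_gpu_minimum(build_number: int, minimum: int = 19044) -> bool:
    """Return True if the Windows build number satisfies the
    WSL2 GPU requirement quoted above (19044 == Windows 10 21H2)."""
    return build_number >= minimum


# On an actual Windows machine you could obtain the build number via:
#   import sys
#   build = sys.getwindowsversion().build
print(meets_wsl2_gpu_minimum(19045))  # True  (22H2)
print(meets_wsl2_gpu_minimum(19043))  # False (21H1, too old)
```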

Note: We might consider making the WSL2 path the primary way for ML practitioners to run TensorFlow on Windows/GPU

Thank you.

I understand that TF2 is still going to be available via the Windows-based Linux kernel (WSL2), which means setting up whole new Python environments in WSL2, etc.

I am asking why native support is being dropped. And is it just the pre-built binaries, or is complete support being dropped? For comparison, JAX can be built to run natively on Windows.


I suppose that over time, complete support will be dropped.

@Adam_Hartshorne, @Eric_Yen

Beginning with TensorFlow 2.11, support for GPU on native Windows has changed. You can install tensorflow-cpu on Windows machines or try the TensorFlow-DirectML-Plugin. Going forward, TensorFlow support will be developed and maintained by the TensorFlow official build collaborators (Intel, AWS, ARM, Linaro, etc.). For more details please refer to the link. Thank you!
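For anyone unsure which kind of wheel they ended up with, a small sketch can report whether the installed build has CUDA compiled in and whether any GPU is visible (guarded so it also runs where TensorFlow is not installed; `tf.test.is_built_with_cuda()` and `tf.config.list_physical_devices` are standard TensorFlow APIs):

```python
def describe_tf_build():
    """Report whether the installed TensorFlow wheel was built with CUDA
    and which GPUs are visible. Returns (built_with_cuda, gpu_list),
    or None if TensorFlow is not importable in this environment."""
    try:
        import tensorflow as tf
    except ImportError:
        return None
    return tf.test.is_built_with_cuda(), tf.config.list_physical_devices("GPU")


info = describe_tf_build()
if info is None:
    print("TensorFlow is not installed in this environment")
else:
    built_with_cuda, gpus = info
    print(f"built with CUDA: {built_with_cuda}, visible GPUs: {gpus}")
```

A `tensorflow-cpu` wheel would report `built with CUDA: False` and an empty GPU list.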

Just to verify, then: if you downloaded the TensorFlow 2.11 source and built it yourself on native Windows, it would still not be able to support GPU, correct?


To utilize the GPU on Windows, you can follow the instructions to build from source. Thank you!

I followed the instructions from this link: Install TensorFlow with pip

It’s far from clean, reporting mutually exclusive library versions for numpy and other modules. I tried to resolve it, but it went down a rabbit hole of mutually incompatible versions for a host of modules. Hoping I could run TensorFlow on my GPU regardless, I ran the test command:

python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"

It printed an empty list, meaning it didn’t find any GPU.

While it seems to run TensorFlow alright, this tells me my kernel may not have NUMA enabled. I’m using the kernel that got installed by default by WSL2 (Ubuntu 22.04). Are there instructions anywhere to get or build a kernel with NUMA enabled?
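One way to see what the running kernel exposes, NUMA-wise, is to look under `/sys/devices/system/node` (a sketch for any Linux guest, including WSL2; on a kernel built without `CONFIG_NUMA` the directory is typically missing, and a NUMA-blind kernel shows at most a single `node0`):

```python
import glob
import os


def numa_nodes():
    """List the NUMA nodes the running kernel exposes via sysfs.
    An empty list usually means the kernel was built without CONFIG_NUMA."""
    return sorted(
        os.path.basename(p)
        for p in glob.glob("/sys/devices/system/node/node*")
    )


print(numa_nodes())  # e.g. ['node0'] on a single-node kernel
```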

I found this link with instructions on how to build a Linux kernel: https://microhobby.com.br/blog/2019/09/21/compiling-your-own-linux-kernel-for-windows-wsl2

But the article seems rather old, is not specific to Ubuntu and, most importantly, doesn’t explain how to enable NUMA. When I search, I see a lot of questions from people stuck on the same issue, so it would be good to have a solid article with instructions.
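For reference, the two pieces involved are usually a kernel config with NUMA turned on before building, and a `.wslconfig` entry on the Windows side pointing WSL2 at the custom image (a hedged sketch; the kernel path is a placeholder, and as noted later in this thread a `CONFIG_NUMA` kernel may still not surface multiple sockets under WSL2):

```
# In the kernel .config before building:
CONFIG_NUMA=y

# In %UserProfile%\.wslconfig on the Windows side
# (backslashes must be doubled in this file):
[wsl2]
kernel=C:\\path\\to\\bzImage
```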

I tried your suggestion of building TensorFlow from source (since I wasn’t getting anywhere with WSL2). It’s hard to say if I installed all the right tools, as the Visual Studio dev tools are now all version 2022. I checked out the r2.10 branch from the TensorFlow Git repo and decided to try running configure.py. It asks a few questions about paths, to which I selected the default setting, except for CUDA where I selected ‘Yes’.

Asking for detailed CUDA configuration...

Please specify the CUDA SDK version you want to use. [Leave empty to default to CUDA 11]:


Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 2]:


Please specify the TensorRT version you want to use. [Leave empty to default to TensorRT 6]:


Please specify the comma-separated list of base paths to look for CUDA libraries and headers. [Leave empty to use the default]:


WARNING: TensorRT support on Windows is experimental

Could not find any cuda.h matching version '11' in any subdirectory:
        ''
        'include'
        'include/cuda'
        'include/*-linux-gnu'
        'extras/CUPTI/include'
        'include/cuda/CUPTI'
        'local/cuda/extras/CUPTI/include'
of:

Asking for detailed CUDA configuration...

and it goes into a loop asking the same thing. Did I check out the wrong release, perhaps? Do you have any updated instructions for this?
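The search that configure performs can be reproduced in a few lines, which makes it easier to test a candidate base path before typing it into the prompt (a sketch; the subdirectory list is copied from the error output above, minus the `include/*-linux-gnu` glob, and the Windows path in the usage line is a hypothetical CUDA install location):

```python
import os
from typing import Optional

# Subdirectories configure probes for cuda.h, per the error output above
# (the 'include/*-linux-gnu' glob pattern is omitted in this sketch).
SUBDIRS = [
    "",
    "include",
    "include/cuda",
    "extras/CUPTI/include",
    "include/cuda/CUPTI",
    "local/cuda/extras/CUPTI/include",
]


def find_cuda_header(base_path: str) -> Optional[str]:
    """Return the first path under base_path containing cuda.h, or None."""
    for sub in SUBDIRS:
        candidate = os.path.join(base_path, sub, "cuda.h")
        if os.path.isfile(candidate):
            return candidate
    return None


# Hypothetical CUDA install location on Windows:
print(find_cuda_header(r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2"))
```

If this returns None for the path you intend to give configure, the build will hit the same error.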

WARNING: Cannot build with CUDA support on Windows.
Starting in TF 2.11, CUDA build is not supported for Windows. For using TensorFlow GPU on Windows, you will need to build/install TensorFlow in WSL2.

However, WSL2 cannot recognize Non-Uniform Memory Access (NUMA).
Even if you use a custom kernel compiled with the CONFIG_NUMA option, WSL2 still cannot recognize multiple sockets.
So even if you can use TF with a GPU in WSL2, it will dramatically increase overall code execution time on multi-processor (MP) systems.

Hi @Keeyoung, could you please share the steps you followed to build your kernel with NUMA support? Thank you.