I am at a point where I am way too deep in the rabbit hole to stop, but far enough invested that I am completely drained and have no energy to go forward. (Why make it so hard to use on WSL in the first place?)
My setup
RTX4090
Python 3.11 / 3.12 depending on the venv, but according to the docs only 3.11 has GPU support
I had CUDA 12.4 and cuDNN 9.1 running, but I've downgraded to CUDA 11.8 and cuDNN 8.5 according to your guide for WSL.
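One thing worth noting: nvidia-smi reports the driver's maximum supported CUDA version, not the toolkit version a venv actually links against, so it can keep showing 12.4 after the downgrade. A minimal sketch to confirm which toolkit actually resolves (assuming nvcc from the installed toolkit is, or should be, on PATH):

```python
import shutil
import subprocess

# nvidia-smi shows the driver's max CUDA version; ask the toolkit itself
# which version is installed instead.
nvcc = shutil.which("nvcc")
if nvcc:
    out = subprocess.run([nvcc, "--version"], capture_output=True, text=True)
    for line in out.stdout.splitlines():
        if "release" in line:  # e.g. "Cuda compilation tools, release 11.8, ..."
            print(line.strip())
else:
    # Toolkit missing or its bin/ directory is not on PATH in this shell.
    print("nvcc not on PATH")
```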
$ nvidia-smi
Thu Apr 25 20:16:50 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.54.15              Driver Version: 551.78         CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 4090        On  |   00000000:06:00.0  On |                  Off |
|  0%   41C    P8             29W /  450W |    2775MiB /  24564MiB |      5%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A        31    G     /Xwayland                                 N/A        |
+-----------------------------------------------------------------------------------------+
$ python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
2024-04-25 20:17:56.627739: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-04-25 20:17:57.129164: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2024-04-25 20:17:57.715198: I external/local_xla/xla/stream_executor/cuda/cuda_executor.cc:984] could not open file to read NUMA node: /sys/bus/pci/devices/0000:06:00.0/numa_node
Your kernel may have been built without NUMA support.
2024-04-25 20:17:57.720250: W tensorflow/core/common_runtime/gpu/gpu_device.cc:2251] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at Install TensorFlow with pip for how to download and setup the required libraries for your platform.
Skipping registering GPU devices…
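As far as I understand, the "Cannot dlopen some GPU libraries" warning means the loader could not resolve one of the CUDA/cuDNN shared objects at import time. A minimal sketch to check which ones are reachable; the .so names below are assumptions based on what a TF 2.x build against CUDA 11.8 / cuDNN 8 typically loads, so adjust them to your build:

```python
import ctypes

def can_load(lib_name):
    """Try to dlopen a shared library; return True if the loader finds it."""
    try:
        ctypes.CDLL(lib_name)
        return True
    except OSError:
        return False

# Assumed library names for CUDA 11.8 / cuDNN 8; a missing entry usually
# means LD_LIBRARY_PATH does not include the CUDA/cuDNN lib directory.
for lib in ["libcudart.so.11.0", "libcublas.so.11", "libcudnn.so.8"]:
    print(lib, "->", "found" if can_load(lib) else "MISSING")
```

If any of these print MISSING, that points at the path setup rather than the TensorFlow wheel itself.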
As this didn't work, I tried building TensorFlow myself using Bazel 6.5 with Clang as my compiler.
During configure I tried both CUDA-only and CUDA + TensorRT support; both variants end in build failures.
I've wasted an entire day on this and won't keep going. If I can get help here, that would be great; if not, I'll wait for the time when TensorFlow actually becomes user-friendly on WSL.