I need to build TensorFlow from source on my Jetson Nano because my Python version is 3.7.11. Can you provide me with a guide for this? I have searched a lot and tried many methods, but unsuccessfully.
Hi @Noran_Nabil,
Welcome to the TensorFlow Forum!
Please follow the instructions given at this link. Also, please consider upgrading to the latest TF version, which is more compatible, and let us know how it goes.
Thank you!
Hi, building the latest TF from source unfortunately requires Python 3.9 or above. You could use Docker or Podman to build a container to download and run a recent Python version.
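If Docker is an option on your Nano, a minimal sketch would look like the following; the image tag and mounted path are just examples, not a tested recipe for the board:

    # python:3.9-slim is a multi-arch image, so Docker should pull the arm64 variant on the Nano.
    docker pull python:3.9-slim
    # Drop into a shell with your project directory mounted (adjust the host path).
    docker run -it --rm -v "$HOME/work:/work" -w /work python:3.9-slim /bin/bash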
I need this version of TF for my work.
I am following the link you provided. Now I am at the build step, but it fails with this error. Can you help me?
za-desktop:~/bazel/tensorflow
/home/azza/bazel/tensorflow/WORKSPACE:19:1
ERROR: An error occurred during the fetch of repository 'local_config_cuda': Traceback (most recent call last):
  File "/home/azza/bazel/tensorflow/third_party/gpus/cuda_configure.bzl", line 1210
    _create_local_cuda_repository(<1 more arguments>)
  File "/home/azza/bazel/tensorflow/third_party/gpus/cuda_configure.bzl", line 888, in _create_local_cuda_repository
    _get_cuda_config(repository_ctx, <1 more arguments>)
  File "/home/azza/bazel/tensorflow/third_party/gpus/cuda_configure.bzl", line 636, in _get_cuda_config
    find_cuda_config(repository_ctx, <2 more arguments>)
  File "/home/azza/bazel/tensorflow/third_party/gpus/cuda_configure.bzl", line 614, in find_cuda_config
    _exec_find_cuda_config(<3 more arguments>)
  File "/home/azza/bazel/tensorflow/third_party/gpus/cuda_configure.bzl", line 608, in _exec_find_cuda_config
    execute(repository_ctx, <1 more arguments>)
  File "/home/azza/bazel/tensorflow/third_party/remote_config/common.bzl", line 208, in execute
    fail(<1 more arguments>)
Repository command failed
Could not find any cuda.h matching version in any subdirectory:
  'include'
  'include/cuda'
  'include/*-linux-gnu'
  'extras/CUPTI/include'
  'include/cuda/CUPTI'
of:
  '/usr/local/cuda'
ERROR: Skipping '//tensorflow/tools/pip_package:wheel': no such package '@local_config_cuda//cuda': Traceback (most recent call last):
  File "/home/azza/bazel/tensorflow/third_party/gpus/cuda_configure.bzl", line 1210
    _create_local_cuda_repository(<1 more arguments>)
  File "/home/azza/bazel/tensorflow/third_party/gpus/cuda_configure.bzl", line 888, in _create_local_cuda_repository
    _get_cuda_config(repository_ctx, <1 more arguments>)
  File "/home/azza/bazel/tensorflow/third_party/gpus/cuda_configure.bzl", line 636, in _get_cuda_config
    find_cuda_config(repository_ctx, <2 more arguments>)
  File "/home/azza/bazel/tensorflow/third_party/gpus/cuda_configure.bzl", line 614, in find_cuda_config
    _exec_find_cuda_config(<3 more arguments>)
  File "/home/azza/bazel/tensorflow/third_party/gpus/cuda_configure.bzl", line 608, in _exec_find_cuda_config
    execute(repository_ctx, <1 more arguments>)
Have you tried installing the latest version of CUDA, assuming that you're on a Linux environment?
If you’re stuck, can you build TF with just CPU settings?
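If you do try a CPU-only build, the rough flow for TF 2.2 is sketched below, based on the public build-from-source docs; answer No when ./configure asks about CUDA, and double-check the target name in your checkout:

    # Run from the tensorflow source root.
    ./configure
    # Answer "N" to the CUDA question so the build skips local_config_cuda entirely.
    bazel build //tensorflow/tools/pip_package:build_pip_package
    ./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
    pip3 install /tmp/tensorflow_pkg/tensorflow-*.whl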
CUDA version 10.2 is already installed.
Hold on:
- Why are you building such an old version of TF from source?
- Do you have the latest JetPack SDK? It should install CUDA 12.
Building from source is challenging and requires careful coordination between the TF version and the CUDA version. This is especially tricky on Jetson because, out of the box, TF with CUDA on ARM processors is no longer supported (only Linux x86_64), so it's best to get the latest device drivers from the vendor, which in this case means JetPack SDK 6.
That's how I see it.
TF v2.2.0 is a requirement to run the models needed for my application, and that version is compatible with CUDA 10.2; this is the reason.
But NVIDIA provides wheels for older versions of JetPack, CUDA, and Python.
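For what it's worth, those Jetson wheels are normally installed from NVIDIA's redist index. The index path and version pin below are illustrative only; you would need to match them to your JetPack release per NVIDIA's "Installing TensorFlow for Jetson Platform" guide:

    # Illustrative sketch: jp/v44 corresponds to JetPack 4.4 - adjust to your JetPack version,
    # and install NVIDIA's listed apt prerequisites (HDF5 headers, etc.) first.
    sudo pip3 install --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v44 'tensorflow<2.3'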
Hmm. After reading this, I personally would be very tempted to abandon this project or inform my stakeholders that it is a challenging ask. It makes me wonder about the quality of NVIDIA's software support.
Now, of course, that's Plan B. Let's tackle the problem. If we look carefully at your 2.2.0 stack trace, the issue is either a file that's misplaced or one that needs renaming.
find_cuda_config is being called, and it calls _exec_find_cuda_config, which is a hack to group all the CUDA library files into a compressed (zlib) archive. But if we look at the supplied stack trace, line 208 of the remote execution function execute tried looking recursively for cuda.h and couldn't find it.
This is where you roll up your sleeves and get your hands dirty, like this person. I can't help you further as I don't have access to the SDK or the hardware. If you can work out from here where the missing file is, or manually amend/hard-code the source, there is a chance this can be solved.
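As a starting point (a sketch only, not verified on a Nano), you could confirm where JetPack actually put cuda.h and then hand that base path to the configure step. TF_CUDA_PATHS, CUDA_TOOLKIT_PATH and TF_CUDA_VERSION are the variables the 2.x cuda_configure.bzl / find_cuda_config machinery reads, as far as I can tell, so double-check the names in your checkout:

    # Find where the header really is; on JetPack it is often under
    # /usr/local/cuda-10.2/targets/aarch64-linux/include rather than /usr/local/cuda/include.
    find /usr -name cuda.h 2>/dev/null
    # Then point the build at that base directory before re-running ./configure.
    export TF_CUDA_PATHS=/usr/local/cuda-10.2
    export CUDA_TOOLKIT_PATH=/usr/local/cuda-10.2
    export TF_CUDA_VERSION=10.2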
I think it’s worth remembering that not all ML tasks need a GPU, like training Gradient Boosted Decision Trees.
That's all I can assist with for now.
Note to self: remember to try to avoid ARM boards for dev/deployment!
Can I ask first whether the combination of TensorFlow 2.2 with CUDA 10.2 + Ubuntu 18.04 + Python 3.7 is supported?
@Noran_Nabil, that is a very difficult question to answer, as the project is rather silent on these specifics.
We can tell that CUDA v10 is supported, but which minor version? No idea.
We can also check the Nano's NVIDIA Compute Capability, which seems to be 5.3.
5.3 is greater than the minimum required by TF 2.2.0 (3.0), so your CUDA/GPU appears to be supported.
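If you want to confirm that on the device itself rather than from a table, the CUDA samples that JetPack installs include deviceQuery; the path below is typical for CUDA 10.x, so adjust it if your install differs:

    # Typical CUDA 10.x samples location; build once, then run.
    cd /usr/local/cuda/samples/1_Utilities/deviceQuery
    sudo make
    ./deviceQuery   # look for "CUDA Capability Major/Minor version number"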
But like I said, for everything else, TF is silent.
tl;dr - I have no idea!