TF-TRT Warning: Could not find TensorRT

I can’t resolve the “TF-TRT Warning: Could not find TensorRT” warning.

I’m using WSL2 on Windows 11.

Distributor ID: Ubuntu
Description:    Ubuntu 22.04.4 LTS
Release:        22.04
Codename:       jammy

Previously, I wasn’t able to get the GPU as the backend, even though I had tried all of the documented methods for installing TensorFlow in WSL.

But installing TensorFlow 2.15.1 under Python 3.11 got me access to the GPU backend:

conda create -n tmpenv python=3.11
conda activate tmpenv
pip install tensorflow[and-cuda]==2.15.1
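
To double-check that the GPU backend is really visible, a quick sanity check with the standard TensorFlow API:

import tensorflow as tf

# An empty list here means TensorFlow fell back to CPU-only.
print(tf.config.list_physical_devices("GPU"))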

versions:

tensorflow                    2.15.1
tensorrt                      10.0.1

nvidia-smi:

Tue May 21 15:45:23 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.76.01              Driver Version: 552.44         CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 4080 ...    On  |   00000000:01:00.0 Off |                  N/A |
| N/A   38C    P4             17W /   55W |       0MiB /  12282MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+
TensorFlow’s startup log also shows the GPU being picked up:

2024-05-21 15:46:26.487553: I external/local_xla/xla/service/service.cc:168] XLA service 0x7fcc3f7b2c00 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2024-05-21 15:46:26.487663: I external/local_xla/xla/service/service.cc:176]   StreamExecutor device (0): NVIDIA GeForce RTX 4080 Laptop GPU, Compute Capability 8.9

I’ve tried every way of installing TensorRT (pip, .deb, .tar) and updated the relevant path variables, etc.
It imports fine in the Python shell:

>>> import tensorrt as trt
>>> trt.__version__
'10.0.1'
>>> trt.__file__
'/home/mohan/miniconda3/envs/tmpenv/lib/python3.11/site-packages/tensorrt/__init__.py'
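
So the Python package is fine; the warning most likely comes from TensorFlow’s dlopen of the TensorRT runtime library instead. As far as I can tell, TF 2.15 was built against TensorRT 8.x, so it looks for the major-version-8 soname, while the 10.0.1 tarball only ships libnvinfer.so.10. A small diagnostic sketch (assuming those sonames) to see what the loader can actually resolve:

import ctypes

# Try to dlopen each soname the way TensorFlow would at startup.
for name in ("libnvinfer.so.8", "libnvinfer.so.10"):
    try:
        ctypes.CDLL(name)
        print(name, "loads OK")
    except OSError as err:
        print(name, "failed to load:", err)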

All of the files are present (from the .tar installation method):

(tmpenv) mohan@LAPTOP-INPO8147:~$ ls TensorRT-10.0.1.6/
bin  data  doc  include  lib  onnx_graphsurgeon  python  samples  targets

(tmpenv) mohan@LAPTOP-INPO8147:~$ ls TensorRT-10.0.1.6/lib/
libnvinfer.so                          libnvinfer_plugin.so.10.0.1
libnvinfer.so.10                       libnvinfer_plugin_static.a
libnvinfer.so.10.0.1                   libnvinfer_static.a
libnvinfer_builder_resource.so.10.0.1  libnvinfer_vc_plugin.so
libnvinfer_dispatch.so                 libnvinfer_vc_plugin.so.10
libnvinfer_dispatch.so.10              libnvinfer_vc_plugin.so.10.0.1
libnvinfer_dispatch.so.10.0.1          libnvinfer_vc_plugin_static.a
libnvinfer_dispatch_static.a           libnvonnxparser.so
libnvinfer_lean.so                     libnvonnxparser.so.10
libnvinfer_lean.so.10                  libnvonnxparser.so.10.0.1
libnvinfer_lean.so.10.0.1              libnvonnxparser_static.a
libnvinfer_lean_static.a               libonnx_proto.a
libnvinfer_plugin.so                   stubs
libnvinfer_plugin.so.10

(tmpenv) mohan@LAPTOP-INPO8147:~/TensorRT-10.0.1.6/python$ ls
tensorrt-10.0.1-cp310-none-linux_x86_64.whl
tensorrt-10.0.1-cp311-none-linux_x86_64.whl
tensorrt-10.0.1-cp312-none-linux_x86_64.whl
tensorrt-10.0.1-cp38-none-linux_x86_64.whl
tensorrt-10.0.1-cp39-none-linux_x86_64.whl
tensorrt_dispatch-10.0.1-cp310-none-linux_x86_64.whl
tensorrt_dispatch-10.0.1-cp311-none-linux_x86_64.whl
tensorrt_dispatch-10.0.1-cp312-none-linux_x86_64.whl
tensorrt_dispatch-10.0.1-cp38-none-linux_x86_64.whl
tensorrt_dispatch-10.0.1-cp39-none-linux_x86_64.whl
tensorrt_lean-10.0.1-cp310-none-linux_x86_64.whl
tensorrt_lean-10.0.1-cp311-none-linux_x86_64.whl
tensorrt_lean-10.0.1-cp312-none-linux_x86_64.whl
tensorrt_lean-10.0.1-cp38-none-linux_x86_64.whl
tensorrt_lean-10.0.1-cp39-none-linux_x86_64.whl
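
Having the files on disk isn’t enough on its own: the dynamic loader only searches the directories on LD_LIBRARY_PATH (plus the system defaults), so I also checked what the Python process actually sees (a simple sketch, assuming the tarball lives under my home directory):

import os

# TensorRT-10.0.1.6/lib must appear here (or in ldconfig's cache)
# for dlopen to find libnvinfer at runtime.
print(os.environ.get("LD_LIBRARY_PATH", "<not set>"))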

I’ve also installed the wheels from the .tar package with pip:

(tmpenv) mohan@LAPTOP-INPO8147:~/TensorRT-10.0.1.6/python$ python3 -m pip install tensorrt-10.0.1-cp311-none-linux_x86_64.whl
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Processing ./tensorrt-10.0.1-cp311-none-linux_x86_64.whl
tensorrt is already installed with the same version as the provided wheel. Use --force-reinstall to force an installation of the wheel.

(tmpenv) mohan@LAPTOP-INPO8147:~/TensorRT-10.0.1.6/python$ python3 -m pip install tensorrt_dispatch-10.0.1-cp311-none-linux_x86_64.whl
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Processing ./tensorrt_dispatch-10.0.1-cp311-none-linux_x86_64.whl
tensorrt-dispatch is already installed with the same version as the provided wheel. Use --force-reinstall to force an installation of the wheel.

(tmpenv) mohan@LAPTOP-INPO8147:~/TensorRT-10.0.1.6/python$ python3 -m pip install tensorrt_lean-10.0.1-cp311-none-linux_x86_64.whl
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Processing ./tensorrt_lean-10.0.1-cp311-none-linux_x86_64.whl
tensorrt-lean is already installed with the same version as the provided wheel. Use --force-reinstall to force an installation of the wheel.

But I still get this warning:

2024-05-21 15:46:19.659564: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT

After many tries, the following worked for me. Try this:

conda create -n tmpenv python=3.11
conda activate tmpenv
conda install nvidia::cuda-toolkit
conda install conda-forge::cudnn
conda install conda-forge::tensorflow
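
To verify that the conda-forge build actually picked up CUDA and cuDNN, something like this should work (standard TensorFlow APIs):

import tensorflow as tf

# Reports the CUDA/cuDNN versions this build was compiled against,
# and whether any GPU is visible at runtime.
print(tf.sysconfig.get_build_info())
print(tf.config.list_physical_devices("GPU"))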

Thanks, but it isn’t working for me… now I get an import error instead:

>>> import tensorflow as tf
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/mohan/miniconda3/envs/tuf/lib/python3.11/site-packages/tensorflow/__init__.py", line 40, in <module>
    from tensorflow.python import pywrap_tensorflow  # pylint: disable=unused-import
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mohan/miniconda3/envs/tuf/lib/python3.11/site-packages/tensorflow/python/pywrap_tensorflow.py", line 34, in <module>
    self_check.preload_check()
  File "/home/mohan/miniconda3/envs/tuf/lib/python3.11/site-packages/tensorflow/python/platform/self_check.py", line 63, in preload_check
    from tensorflow.python.platform import _pywrap_cpu_feature_guard
ImportError: /home/mohan/miniconda3/envs/tuf/lib/python3.11/site-packages/tensorflow/python/platform/../../libtensorflow_framework.so.2: undefined symbol: _ZTIN6snappy4SinkE

I am having the same problem… Have you solved this?

Not yet @SangMin_Lee

I am having the same problem. I’m on Ubuntu 22.04.4 LTS with NVIDIA driver 535 and CUDA 12.2 (with the suggested cuDNN 8.9.3). Python 3.10 can import tensorrt without any errors, but when I import TensorFlow I get the same warning:

tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT

The TensorRT page suggests cuDNN 8.9.7 for TRT 10.1.0, but that might create issues with CUDA 12.2. I am open to suggestions.
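
In case it helps with debugging, here is the same kind of loader check suggested earlier in the thread, extended to cuDNN (assuming the 8.x sonames my setup should provide):

import ctypes

# Both of these must resolve for TF's GPU and TF-TRT paths.
for name in ("libcudnn.so.8", "libnvinfer.so.8"):
    try:
        ctypes.CDLL(name)
        print(name, "resolved")
    except OSError as err:
        print(name, "NOT found:", err)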