stan_r
October 25, 2021, 2:29pm
Hi all,
I have recently been testing various workflows for optimising inference in production. The non-deprecated workflows that I have found are TF-TRT and conversion to .onnx.
When attempting to convert a Tensorflow 2.6 SavedModel format model using the guidelines published here: Accelerating Inference in TensorFlow with TensorRT User Guide - NVIDIA Docs
I get the following error:
Could not load dynamic library ‘libnvinfer.so.7’
I have a recent version of CUDA (11.4) with up-to-date cuDNN and TensorRT (v8). The libnvinfer module is installed, but at version 8. Does this error indicate that the conversion command trt.TrtGraphConverterV2() only supports TensorRT version 7?
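For reference, my understanding of the error is that TensorFlow dlopens the exact soname it was built against, so an installed libnvinfer.so.8 does not satisfy a request for libnvinfer.so.7. A minimal pure-Python sketch of that exact-soname matching (an illustration of the assumed behaviour, not TensorFlow's actual loader code):

```python
def can_load(required_soname, installed_sonames):
    """dlopen-style matching: the exact soname must be present.

    A library with a different major version (e.g. libnvinfer.so.8)
    does NOT satisfy a request for libnvinfer.so.7, which is why
    installing TensorRT 8 still triggers
    "Could not load dynamic library 'libnvinfer.so.7'".
    """
    return required_soname in installed_sonames

# What I have installed (TensorRT 8):
installed = ["libnvinfer.so.8", "libnvinfer_plugin.so.8"]

print(can_load("libnvinfer.so.7", installed))  # False -> load error
print(can_load("libnvinfer.so.8", installed))  # True
```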
Thanks for any help you can provide.
Bhack
October 26, 2021, 11:33am
If I remember correctly, TensorRT 8 introduced breaking API changes, so I think you need to use TensorRT 7:
opened 02:02AM - 13 May 21 UTC
TF 2.5
**System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04… ): Ubuntu 20.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: N/A
- TensorFlow installed from (source or binary): binary
- TensorFlow version: 2.4.1, 2.5, etc
- Python version: 3.8
- Installed using virtualenv? pip? conda?: no, built from source
- Bazel version (if compiling from source): 3.1 (for TF 2.4.1), 3.7.2 (for TF 2.5.0-rcx)
- GCC/Compiler version (if compiling from source): gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
- CUDA/cuDNN version: Cuda 11.1, cudnn8 (8.0.5.39-1+cuda11.1) or Cuda-11-2, libcudnn 8.1.1, 8.2,
- GPU model and memory: GTX-1080ti
- TensorRT (crucial): 8.0.0-1+cuda11.0, or 8.0.0-1+cuda11.3
**Describe the problem**
When compiling with support for TensorRT 8 (via libnvinfer8), compilation fails (log is below).
**Provide the exact sequence of commands / steps that you executed before running into the problem**
When configuring the build, make sure you build with TensorRT support, and make sure TensorRT version 8 is selected. Build TF as usual. Compilation will fail.
If you install TensorRT version 7 manually (from debs available for Ubuntu 18.04), compilation will complete just fine.
**Any other info / logs**
Relevant error:
```
C++ compilation of rule '//tensorflow/compiler/tf2tensorrt:tensorrt_stub' failed (Exit 1): crosstool_wrapper_driver_is_not_gcc failed: error executing command
In file included from bazel-out/k8-opt/bin/external/local_config_tensorrt/_virtual_includes/tensorrt_headers/third_party/tensorrt/NvInfer.h:54,
                 from tensorflow/compiler/tf2tensorrt/stub/nvinfer_stub.cc:17:
bazel-out/k8-opt/bin/external/local_config_tensorrt/_virtual_includes/tensorrt_headers/third_party/tensorrt/NvInferRuntime.h:2264:51: note: from previous declaration 'nvinfer1::IPluginRegistry* getPluginRegistry() noexcept'
 2264 | extern "C" TENSORRTAPI nvinfer1::IPluginRegistry* getPluginRegistry() noexcept;
```
Full log here:
[gesdm-tf2.5.0rc3-error.txt](https://github.com/tensorflow/tensorflow/files/6469944/gesdm-tf2.5.0rc3-error.txt)
But NVIDIA has a draft PR adding TensorRT 8.2 support that you can follow at:
https://github.com/tensorflow/tensorflow/pull/52342
stan_r
October 26, 2021, 11:57am
Thank you for the reply; this does look like a good match for my problem. I tried downgrading TensorRT to v7 on a hunch, but ran into compatibility problems with other parts of the build environment. I'll keep an eye on these links and see if I can get things working.