TF-TRT: No Support for TensorRT v8?

Hi all,

I have recently been testing various workflows for optimising inference in production. The two non-deprecated workflows I have found are TF-TRT and conversion to ONNX.

When attempting to convert a TensorFlow 2.6 model in SavedModel format using the guidelines published here: Accelerating Inference in TensorFlow with TensorRT User Guide - NVIDIA Docs
I get the following error:
Could not load dynamic library 'libnvinfer.so.7'
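For context, this is roughly the conversion call I am running, following the user guide above (the SavedModel paths and the FP16 precision choice are placeholders for my actual setup):

```python
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Conversion parameters per the TF-TRT user guide; FP16 is just the
# precision mode I happen to be testing with.
params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
    precision_mode=trt.TrtPrecisionMode.FP16)

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir="./my_saved_model",  # placeholder path
    conversion_params=params)
converter.convert()  # the libnvinfer load failure surfaces around here
converter.save("./my_saved_model_trt")  # placeholder path
```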

I have a recent version of CUDA (11.4) with up-to-date cuDNN and TensorRT (v8). The libnvinfer library is installed, but at version 8. Does this error indicate that the conversion command trt.TrtGraphConverterV2() only supports TensorRT version 7?
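A quick standard-library check confirms which libnvinfer sonames the dynamic loader can actually resolve on my machine (a minimal sketch; the two sonames are just the versions in question):

```python
import ctypes

# Try to dlopen each soname: the TF 2.6 wheel asks for the .so.7 name,
# while a TensorRT 8 installation only ships .so.8.
for soname in ("libnvinfer.so.7", "libnvinfer.so.8"):
    try:
        ctypes.CDLL(soname)
        print(f"{soname}: loadable")
    except OSError:
        print(f"{soname}: not found")
```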

Thanks for any help you can provide.

If I remember correctly, TensorRT 8 introduced breaking API changes, so I think you need to use TensorRT 7.

But NVIDIA has a draft PR adding TensorRT 8.2 support that you can follow at:

https://github.com/tensorflow/tensorflow/pull/52342
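In the meantime, you can check which CUDA/cuDNN stack your installed wheel was built against using the public tf.sysconfig API (a sketch; I am not sure whether the returned dict also reports TensorRT linkage on every build):

```python
import tensorflow as tf

# Prints the build-time dependency versions of the installed wheel,
# e.g. cuda_version and cudnn_version.
for key, value in tf.sysconfig.get_build_info().items():
    print(f"{key}: {value}")
```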


Thank you for the reply; that does look like a good match for my problem. I tried downgrading TensorRT to v7 on a hunch, but ran into compatibility problems with other parts of the build environment. I will keep an eye on these links and see if I can get things working.