Hello TensorFlow team,
I’m building TensorFlow from source on an NVIDIA Jetson platform for C++ inference (the Python wheel for TensorFlow‑2.16.1+nv24.08 on JetPack 6.1 works fine). I’m targeting TensorRT 10 with CUDA 12.6, but TF‑TRT’s codebase still uses an NVInfer plugin stub that only supports TensorRT versions up to 8 or 9. Consequently, TF‑TRT fails to detect TensorRT 10.
Specifically, this file shows there is no support for TensorRT 10 yet:
tensorflow/tensorflow/blob/master/tensorflow/compiler/tf2tensorrt/stub/nvinfer_plugin_stub.cc
What I'm Trying to Do
- Build TensorFlow 2.x from source on a Jetson Orin Nano 8GB (Ubuntu 22.04)
- Use TensorRT 10.0.0 (with CUDA 12.6) for optimized inference via TF‑TRT
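For reference, this is roughly how I point TensorFlow's `./configure` at TensorRT before the Bazel build. A sketch only: the install path is an assumption for an aarch64 JetPack layout and may differ on your system.

```shell
# Hedged sketch of the configure environment for a TF-TRT build on Jetson.
# TENSORRT_INSTALL_PATH below is an assumed JetPack 6.x location; verify
# where libnvinfer is actually installed on your device first.
export TF_NEED_CUDA=1
export TF_NEED_TENSORRT=1
export TENSORRT_INSTALL_PATH=/usr/lib/aarch64-linux-gnu  # assumption
./configure
```

With TensorRT 10 installed, this is the point where the build trips over the stub described below.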
Current Behavior
The build fails (or TF‑TRT is silently disabled) because the stub in nvinfer_plugin_stub.cc
doesn't recognize the TensorRT 10 version macros.
Expected Behavior
TF‑TRT should detect and support TensorRT 10. Is there an ETA for official TensorRT 10 compatibility? If there’s a patch or branch already in progress, I’d love to help test or contribute.
Environment
| Component | Version |
| --- | --- |
| TensorFlow | 2.16 |
| TensorRT | 10.0.0 |
| CUDA | 12.6 |
| JetPack | 6.1 (Jetson Orin Nano 8GB) |
Thank you for any guidance on when TF‑TRT will support TensorRT 10 — or pointers to a workaround in the meantime!