TensorRT: #0(zero-based) was expected to be a uint8 tensor but is a float tensor

I am trying to convert the EfficientNetD1 model from the TensorFlow Model Zoo to TensorRT using this script:

import tensorflow as tf
import numpy as np
from tensorflow.python.compiler.tensorrt import trt_convert as trt

input_saved_model_dir = './saved_model/'
output_saved_model_dir = './test/'
num_runs = 2

conversion_params = trt.DEFAULT_TRT_CONVERSION_PARAMS
conversion_params = conversion_params._replace(max_workspace_size_bytes=(1 << 32))
conversion_params = conversion_params._replace(precision_mode="FP16")
conversion_params = conversion_params._replace(maximum_cached_engines=100)

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir=input_saved_model_dir,
    conversion_params=conversion_params)
converter.convert()

def my_input_fn():
    inp1 = np.random.normal(size=(1,640,640,3)).astype(np.float32)
    yield inp1
    
converter.build(input_fn=my_input_fn)
converter.save(output_saved_model_dir) 

But I only get this error:

I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2)

File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/execute.py", line 59, in quick_execute
    tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.InvalidArgumentError: cannot compute __inference_pruned_174167 as input #0(zero-based) was expected to be a uint8 tensor but is a float tensor [Op:__inference_pruned_174167]

Any suggestion on this issue?


Hi @TensorOverflow,

Thank you for using TensorFlow.
As the error indicates, the model's serving signature expects a uint8 input tensor, so build the engine with an ndarray of dtype uint8 instead of float32. Rather than casting normally distributed floats (which collapses almost all values to 0 or 255), generate random pixel values directly:

inp1 = np.random.randint(0, 256, size=(1, 640, 640, 3), dtype=np.uint8)
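Putting that fix into the original script, the input function might look like this (a sketch, assuming the detection SavedModel takes a single uint8 image tensor of shape (1, 640, 640, 3)):

```python
import numpy as np

def my_input_fn():
    # Detection models from the TF2 Model Zoo take uint8 image batches,
    # so generate random pixel values in [0, 255] with the matching dtype.
    inp1 = np.random.randint(0, 256, size=(1, 640, 640, 3), dtype=np.uint8)
    # converter.build() expects the input_fn to yield a tuple (or list)
    # of input arrays, one per input of the serving signature.
    yield (inp1,)
```

This is then passed as before via converter.build(input_fn=my_input_fn).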

Please try this code change. To understand the error better, you can also raise TensorFlow's log verbosity before running the conversion (note that TrtGraphConverterV2 does not take a logger or log_severity argument):

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '0'  # surface INFO-level C++ logs
import tensorflow as tf
tf.get_logger().setLevel('DEBUG')

Thank you.