Facing issue while invoking TensorFlow Lite interpreter

Hi Team,

I am facing an issue while trying to invoke the interpreter:

    File "/home/ubuntu/Documents/pythonProject1/tensorflow_lite.py", line 48, in
      interpreter.invoke()
    File "/home/ubuntu/Documents/pythonProject1/venv/lib/python3.8/site-packages/tflite_runtime/interpreter.py", line 917, in invoke
      self._interpreter.Invoke()
    RuntimeError: Select TensorFlow op(s), included in the given model, is(are) not supported by this interpreter. Make sure you apply/link the Flex delegate before inference. For the Android, it can be resolved by adding "org.tensorflow:tensorflow-lite-select-tf-ops" dependency. See instructions: https://www.tensorflow.org/lite/guide/ops_select
    Node number 5 (FlexTensorListReserve) failed to prepare.

Could you please help me with this issue?

Regards,
Sonali

Hi @Sonali_Faldesai

Do you have the code you used to convert to your .tflite file?
It seems that you did not include select_tf_ops while converting.
Take a look at this.

Regards

Hi @George_Soloupis ,

I did use select_tf_ops while converting the .h5 file to a TFLite model.

Please find the code below:
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.target_spec.supported_ops = [
        tf.lite.OpsSet.TFLITE_BUILTINS,  # enable TensorFlow Lite ops.
        tf.lite.OpsSet.SELECT_TF_OPS     # enable TensorFlow ops.
    ]
    converter.target_spec.supported_types = [tf.float16]
    tflite_model = converter.convert()

    with open('model.tflite', 'wb') as f:
        f.write(tflite_model)

Regards,
Sonali

It seems that the documentation is clear about these ops: they are included automatically.
Is your TensorFlow version above 2.3?

Hi @George_Soloupis ,

I am currently using a Linux system with tensorflow == 2.12.0 and tflite-runtime == 2.11.0.
Could you please tell me how to check the ops version?

Best Regards,
Sonali

Are you using the tflite_runtime package? That package does not include the Select TF ops, as per the note.
Can you also check this?
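To confirm which runtime and version you are actually running, a quick check (a minimal sketch, assuming both packages are installed in the environment):

    import tflite_runtime
    print(tflite_runtime.__version__)  # version of the standalone TFLite runtime

    import tensorflow as tf
    print(tf.__version__)              # version of the full TensorFlow package, if present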

Yes, I am using the tflite_runtime package to run the built model.

Please find the conversion code below:
    import tensorflow as tf
    from tensorflow.keras import backend as K  # needed by r2_keras below

    def r2_keras(y_true, y_pred):
        """
        Coefficient of Determination
        """
        SS_res = K.sum(K.square(y_true - y_pred))
        SS_tot = K.sum(K.square(y_true - K.mean(y_true)))
        return 1 - (SS_res / (SS_tot + K.epsilon()))

    model = tf.keras.models.load_model('pm_regression_model.h5', custom_objects={'r2_keras': r2_keras})
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.target_spec.supported_ops = [
        tf.lite.OpsSet.TFLITE_BUILTINS,  # enable TensorFlow Lite ops.
        tf.lite.OpsSet.SELECT_TF_OPS     # enable TensorFlow ops.
    ]
    converter.target_spec.supported_types = [tf.float16]
    tflite_model = converter.convert()

    with open('tflite_model.tflite', 'wb') as f:
        f.write(tflite_model)

If I do not use

    converter.target_spec.supported_ops = [
        tf.lite.OpsSet.TFLITE_BUILTINS,  # enable TensorFlow Lite ops.
        tf.lite.OpsSet.SELECT_TF_OPS     # enable TensorFlow ops.
    ]

then I get the error below:

            /home/hasher/.local/lib/python3.8/site-packages/tensorflow/python/saved_model/save.py:1276:0: error: failed to legalize operation 'tf.TensorListReserve' that was explicitly marked illegal
            <unknown>:0: note: loc(fused["StatefulPartitionedCall:", "StatefulPartitionedCall"]): called from
            <unknown>:0: error: Lowering tensor list ops is failed. Please consider using Select TF ops and disabling `_experimental_lower_tensor_list_ops` flag in the TFLite converter object. For example, converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]\n converter._experimental_lower_tensor_list_ops = False

Could you please help me with the conversion of the .h5 model without using the Select TF ops?

Have you tried using this instead of tflite_runtime? It is the usual way if you are not on a Raspberry Pi or a microcontroller.
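Something like the following (a minimal sketch; it assumes the full tensorflow package is installed, which bundles the Flex delegate, and reuses the file name from your conversion code):

    import numpy as np
    import tensorflow as tf

    # The interpreter from the full tensorflow package links the
    # Select TF ops (Flex delegate) support automatically.
    interpreter = tf.lite.Interpreter(model_path='tflite_model.tflite')
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # Dummy input matching the model's expected shape and dtype.
    dummy = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
    interpreter.set_tensor(input_details[0]['index'], dummy)
    interpreter.invoke()
    print(interpreter.get_tensor(output_details[0]['index']))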

Yes, with this it works as expected.

But we are currently running it on an edge device (NXP GoldBox) which has limited storage, so we are not able to install the tensorflow library, as it is 1.6 GB.

Do you have any suggestions for this issue?

There is documentation here, but I am not sure if you can build a wheel for tflite-runtime that includes the Select TF ops.
Let's tag @khanhlvg to shed some light on whether the tflite-runtime wheel build can also include these ops.

Hi @George_Soloupis ,

Any updates on this?

Hi @George_Soloupis,
I have this problem when I build a Flutter desktop app. Do you know how to fix it?
Thanks

Is this issue resolved? I am having the same issue, and my case is similar to @Sonali_Faldesai's. @George_Soloupis?

The issue arises because your TensorFlow Lite model uses TensorFlow operations that are not natively supported by the standard TensorFlow Lite interpreter. To resolve this, you must enable the TensorFlow Lite Flex delegate, which allows the interpreter to execute the unsupported TensorFlow operations.
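On the conversion side, a minimal sketch (it restates the converter settings already shown in this thread, plus the `_experimental_lower_tensor_list_ops` flag that the converter error message itself suggests for tensor-list ops such as those in LSTMs):

    import tensorflow as tf

    converter = tf.lite.TFLiteConverter.from_keras_model(model)  # `model` loaded as earlier in the thread
    converter.target_spec.supported_ops = [
        tf.lite.OpsSet.TFLITE_BUILTINS,  # TensorFlow Lite builtin ops
        tf.lite.OpsSet.SELECT_TF_OPS     # Select TF (Flex) ops
    ]
    # Suggested by the converter error for tensor-list (LSTM) ops:
    converter._experimental_lower_tensor_list_ops = False
    tflite_model = converter.convert()

At inference time the interpreter must link the Flex delegate: on Android that is the org.tensorflow:tensorflow-lite-select-tf-ops dependency mentioned in the runtime error, and in Python the full tensorflow package includes it, while the standalone tflite_runtime package does not.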

@Tim_Wolfe, do you have any documentation on how to do that?

I am still getting the message below after selecting TFLITE_BUILTINS.

    The following operation(s) need TFLite custom op implementation(s):
    Custom ops: CombinedNonMaxSuppression, ResizeBilinear
    Details:
    tf.CombinedNonMaxSuppression(tensor<?x49104x1x4xf32>, tensor<?x?x1xf32>, tensor, tensor, tensor, tensor) -> (tensor<?x300x4xf32>, tensor<?x300xf32>, tensor<?x300xf32>, tensor<?xi32>) : {_cloned = true, clip_boxes = false, device = "", pad_per_class = false}
    tf.ResizeBilinear(tensor<?x?x?x3xui8>, tensor<2xi32>) -> (tensor<?x?x?x3xf32>) : {align_corners = false, device = "", half_pixel_centers = true}
    See instructions: Custom operators | TensorFlow Lite

Hi, I had the same problem when trying to save an LSTM model (for forecasting, not classification) to TFLite. I ended up doing this, and it worked:

    import tensorflow as tf

    def save_model_to_file(model, tflite_model_name, input_size):
        if tflite_model_name.split('_')[0] == 'lstm' or tflite_model_name.split('_')[0] == 'rnn':
            # Fix the input size via a concrete function, then convert
            # from the SavedModel instead of the Keras model directly.
            run_model = tf.function(lambda x: model(x))
            concrete_func = run_model.get_concrete_function(
                tf.TensorSpec([input_size, 1], model.inputs[0].dtype))

            model.save(tflite_model_name, save_format="tf", signatures=concrete_func)
            converter = tf.lite.TFLiteConverter.from_saved_model(tflite_model_name)
            tflite_model = converter.convert()
            # Save the model
            with open(f'{tflite_model_name}.tflite', 'wb') as f:
                f.write(tflite_model)
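Called, for example, like this (hypothetical names; the model name needs the 'lstm_' or 'rnn_' prefix so the check above matches):

    # `model` is a trained Keras LSTM, 50 is its input window size.
    save_model_to_file(model, 'lstm_forecast', input_size=50)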