LSTM operator error with TFLite GPU delegate

Hello,

I have a TFLite model that I’m running on a Snapdragon processor, and I’m trying to offload the inference to the GPU (to free up CPU resources) using the delegate created by TfLiteGpuDelegateV2Create().
Unfortunately, the delegate refuses to offload one of the layers, the UNIDIRECTIONAL_SEQUENCE_LSTM op, and prints the following error message:

INFO: Created TensorFlow Lite delegate for GPU.
ERROR: Following operations are not supported by GPU delegate:
UNIDIRECTIONAL_SEQUENCE_LSTM: Operation is not supported.
21 operations will run on the GPU, and the remaining 1 operations will run on the CPU.
INFO: Initialized OpenCL-based API from serialized data.
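For context, the way the delegate is attached is roughly equivalent to the Python sketch below (the native code calls TfLiteGpuDelegateV2Create() directly; the delegate library name and model path here are placeholders):

import tensorflow as tf

# Load the GPU delegate shared library; the library name is a
# placeholder and depends on how the delegate was built for the device.
gpu_delegate = tf.lite.experimental.load_delegate('libtensorflowlite_gpu_delegate.so')

# Attach the delegate when building the interpreter. Ops the delegate
# cannot take (UNIDIRECTIONAL_SEQUENCE_LSTM here) fall back to the CPU,
# which is what produces the log lines above.
interpreter = tf.lite.Interpreter(
    model_path='model.tflite',
    experimental_delegates=[gpu_delegate],
)
interpreter.allocate_tensors()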

The “TensorFlow Lite on GPU” help page says that the delegate supports LSTM v2 (Basic LSTM only).

But what do “Basic” and “v2” mean?

By the way, the layer in question was created with the following parameters:

layers.append(tf.keras.layers.LSTM(num_recurrent_units, return_sequences=True, return_state=False))

What else could be wrong?

Thanks.

Hi @Stu_Iliev,

Sorry for the delayed response; I am working through the backlog of issues. The error occurs because the UNIDIRECTIONAL_SEQUENCE_LSTM variant in your model is not currently supported by the delegate created with TfLiteGpuDelegateV2Create(). Could you first make sure the model is converted with a basic LSTM? Setting recurrent_activation='sigmoid' and use_bias=True on the layer lets the converter produce the basic LSTM form:

layers.append(tf.keras.layers.LSTM(num_recurrent_units, return_sequences=True, return_state=False, recurrent_activation='sigmoid', use_bias=True))

Please share a reproducible example if possible for further assistance.
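For reference, here is a minimal end-to-end sketch of the conversion. The input shape and unit count are placeholder values, and the compatibility check at the end assumes TensorFlow 2.9 or newer:

import tensorflow as tf

num_recurrent_units = 64  # placeholder

# Minimal model around the suggested layer configuration. The default
# tanh/sigmoid activations with use_bias=True are the form the converter
# can fuse into the basic UNIDIRECTIONAL_SEQUENCE_LSTM kernel.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20, 32)),  # (timesteps, features), placeholders
    tf.keras.layers.LSTM(
        num_recurrent_units,
        return_sequences=True,
        return_state=False,
        recurrent_activation='sigmoid',
        use_bias=True,
    ),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Report per-op GPU delegate compatibility of the converted model.
tf.lite.experimental.Analyzer.analyze(
    model_content=tflite_model,
    gpu_compatibility=True,
)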

Thank You