Hello,
I have a TFLite model running on a Snapdragon processor, and I'm trying to offload inference to the GPU (to free up CPU resources) using the TfLiteGpuDelegateV2Create() delegate.
Unfortunately, the delegate refuses to offload one of the layers, a unidirectional LSTM, and prints the following error message:
INFO: Created TensorFlow Lite delegate for GPU.
ERROR: Following operations are not supported by GPU delegate:
UNIDIRECTIONAL_SEQUENCE_LSTM: Operation is not supported.
21 operations will run on the GPU, and the remaining 1 operations will run on the CPU.
INFO: Initialized OpenCL-based API from serialized data.
The "TensorFlow Lite on GPU" help page says that the delegate supports "LSTM v2 (Basic LSTM only)".
But what do "Basic" and "v2" mean here?
By the way, the layer in question was created with the following parameters:
layers.append(tf.keras.layers.LSTM(num_recurrent_units, return_sequences=True, return_state=False))
What else could be wrong?
Thanks.