Hello TF experts!
I’m trying to run a TF-Lite model (Armv7, Linux, C++) converted from an ONNX/TF model.
I already did this for a simple DNN, but now that I’m trying an LSTM I get errors at runtime.
I have also updated the TF-Lite library build to compile the contents of the “tensorflow/lite/delegates/flex” folder, but I still get this error at runtime:
“ERROR: Regular TensorFlow ops are not supported by this interpreter. Make sure you apply/link the Flex delegate before inference.
ERROR: Node number 1 (FlexVarHandleOp) failed to prepare.”
I’m not building TF-Lite with Bazel (I have a specific build environment); are there additional steps required in that case?
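For reference, my understanding from the public headers is that the Flex delegate can also be applied explicitly before inference, roughly like this (a minimal, untested sketch; the model path is a placeholder and it assumes the flex delegate sources are actually compiled into my build):

```cpp
#include <memory>

#include "tensorflow/lite/delegates/flex/delegate.h"
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  // Load the converted model ("model.tflite" is a placeholder path).
  auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
  if (!model) return 1;

  // Build an interpreter with the built-in op resolver.
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  if (tflite::InterpreterBuilder(*model, resolver)(&interpreter) != kTfLiteOk) {
    return 1;
  }

  // Create the Flex delegate and attach it so Flex (TF) ops such as
  // FlexVarHandleOp can be handled.
  auto flex_delegate = tflite::FlexDelegate::Create();
  if (!flex_delegate ||
      interpreter->ModifyGraphWithDelegate(flex_delegate.get()) != kTfLiteOk) {
    return 1;
  }

  if (interpreter->AllocateTensors() != kTfLiteOk) return 1;

  // ... fill input tensors here ...
  if (interpreter->Invoke() != kTfLiteOk) return 1;
  return 0;
}
```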
Hi George,
thanks for the link.
The “tensorflow/lite/delegates/flex” source folder is added to the build via “/tensorflow_src/tensorflow/lite/tools/make/Makefile”.
But I have not set the “--config=monolithic” flag. What is its exact purpose? The comment in the source is not quite clear to me.
Could you write here the Bazel command that you use to build the TensorFlow Lite libraries with the Bazel pipeline?
If I can’t answer myself I can tag a specific person, but we have to show them what you have done already.
I do not see that kind of instructions in the documentation. I think you should try whatever the link suggests
(or modify the script to additionally link the “tensorflow/lite/delegates/flex:delegate” target).
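For reference, my understanding is that the select-TF-ops path builds the Flex delegate as a separate shared library with Bazel, roughly like this (a sketch only; “--config=elinux_armhf” is my assumption for an Armv7 hard-float cross-compile, and “--config=monolithic” is what the flex target is usually built with):

```bash
# Flex (select TF ops) delegate shared library.
bazel build -c opt --config=elinux_armhf --config=monolithic \
  //tensorflow/lite/delegates/flex:tensorflowlite_flex

# Core TF-Lite shared library.
bazel build -c opt --config=elinux_armhf \
  //tensorflow/lite:libtensorflowlite.so
```

I believe the application then has to link against both libraries.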
Thanks for your advice, but unfortunately what I’m trying to do seems to be unsupported.
Building the TF-Lite library with TensorFlow (Flex) operators is not supported when using the CMake build system.
Hello @Leroy, I have tried to build with Bazel (without adding new ops) just to check feasibility, but it generates shared libraries of over 350 MB, which is incompatible with my target.
Moreover, Bazel requires a toolchain and glibc that are too recent for my legacy environment.
@nicolas_vassal Thank you for your reply. So how did you end up handling it? Did you change the model implementation to avoid the use of TF select ops?