I was wondering how I can convert a TensorFlow Lite object detection model created with the TFLite Model Maker to OpenVINO. I don’t think this is possible after exporting the model to TensorFlow Lite, but it should work if the model is exported as a SavedModel.
@George_Soloupis’s workflow is the recommended one. As long as you can create a SavedModel for your OD network and ensure it’s supported in OpenVINO, you should be good to go. Keep in mind, though, that OpenVINO’s TensorFlow 2 support is still very experimental and limited.
Thanks for the replies. I tried the code suggested by @George_Soloupis, but I couldn’t get the conversion to work. I’m currently getting the following error:
[ FRAMEWORK ERROR ] Cannot load input model: TensorFlow cannot read the model file: "/content/object_model_maker/saved_model.pb" is incorrect TensorFlow model file.
The file should contain one of the following TensorFlow graphs:
1. frozen graph in text or binary format
2. inference graph for freezing with checkpoint (--input_checkpoint) in text or binary format
3. meta graph
Make sure that --input_model_is_text is provided for a model in text format. By default, a model is interpreted in binary format. Framework error details: Error parsing message.
For more information please refer to Model Optimizer FAQ, question #43. (https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html?question=43#question-43)
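A likely cause, judging from the "Error parsing message" detail: Model Optimizer was pointed at saved_model.pb via --input_model, which expects a frozen GraphDef, while saved_model.pb is a SavedModel protobuf. For TF2 SavedModels, Model Optimizer takes the containing directory via --saved_model_dir instead. A sketch of the invocation, reusing the paths that appear in this thread (the output directory name is my own; adjust everything to your setup):

```shell
# Point Model Optimizer at the SavedModel *directory*, not at saved_model.pb.
# --input_model is for frozen GraphDefs; --saved_model_dir is for TF2 SavedModels.
python /opt/intel/openvino_2021.3.394/deployment_tools/model_optimizer/mo.py \
    --saved_model_dir /content/object_model_maker \
    --output_dir /content/openvino_ir
```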
Unfortunately, it seems like I can’t embed any links, so I can’t share my current notebook with you, but after creating the saved model, I’m using the following code to convert the model:
It seems that even when someone saves the Model Maker model in SavedModel format with the code provided here:
model.export(export_dir='.', export_format=[ExportFormat.SAVED_MODEL, ExportFormat.LABEL])
and then tries to reload it like:
reloaded_model = tf.saved_model.load('./object_detection_model_maker_saved_model/saved_model')
reloaded_model.summary()
it throws an error. I have also checked this with the audio classifier example.
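The AttributeError on reload is expected behaviour rather than a broken export: tf.saved_model.load() returns a generic _UserObject that carries the serving signatures but none of the Keras API (summary(), fit(), and so on). A minimal sketch with a throwaway Keras model standing in for the Model Maker network:

```python
import tensorflow as tf

# Build and save a tiny stand-in model (a placeholder, not the Model Maker export).
model = tf.keras.Sequential([tf.keras.Input(shape=(4,)), tf.keras.layers.Dense(2)])
tf.saved_model.save(model, "/tmp/demo_saved_model")

# Reloading with tf.saved_model.load yields a _UserObject, not a Keras model:
reloaded = tf.saved_model.load("/tmp/demo_saved_model")
print(hasattr(reloaded, "summary"))   # False: no Keras API on the loaded object
print(list(reloaded.signatures))      # ['serving_default']

# Inference still works through the serving signature:
infer = reloaded.signatures["serving_default"]
out = infer(tf.ones((1, 4)))
print({k: v.shape for k, v in out.items()})
```

So to inspect or run the reloaded model, go through reloaded.signatures rather than expecting Keras methods on it.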
Please install required versions of components or use install_prerequisites script
/opt/intel/openvino_2021.3.394/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites_tf2.sh
Note that install_prerequisites scripts may install additional components.
[ FRAMEWORK ERROR ] Cannot load input model: TensorFlow cannot read the model file: "/content/saved_model/saved_model.pb" is incorrect TensorFlow model file.
The file should contain one of the following TensorFlow graphs:
1. frozen graph in text or binary format
2. inference graph for freezing with checkpoint (--input_checkpoint) in text or binary format
3. meta graph
Make sure that --input_model_is_text is provided for a model in text format. By default, a model is interpreted in binary format. Framework error details: Error parsing message.
For more information please refer to Model Optimizer FAQ, question #43. (https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html?question=43#question-43)
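In both tracebacks the path handed to Model Optimizer ends in saved_model.pb, while --saved_model_dir wants the directory containing that file. A tiny helper can normalize whichever form you have; the function name and behaviour here are my own illustration, not part of any toolkit:

```python
from pathlib import Path

def saved_model_dir(path: str) -> Path:
    """Return the directory Model Optimizer's --saved_model_dir expects,
    whether given the directory itself or the saved_model.pb inside it."""
    p = Path(path)
    # If handed the protobuf file, step up to its parent directory.
    return p.parent if p.name == "saved_model.pb" else p

print(saved_model_dir("/content/object_model_maker/saved_model.pb"))
# -> /content/object_model_maker
print(saved_model_dir("/content/saved_model"))
# -> /content/saved_model
```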
Tagging @Yuqi_Li regarding your previous answer.
This is what I have noticed as well! summary() is not available after reloading the model and throws the same error: AttributeError: '_UserObject' object has no attribute 'summary'