I’ve been following along some tutorials in training a custom object detection model using Tensorflow 2.x Object Detection API.
Everything seems to work until I try to export the trained inference graph. In TensorFlow 1.x, the script master/research/object_detection/export_inference_graph.py exports the trained model checkpoints to a single frozen inference graph.
In TensorFlow 2.x, this script no longer works; instead, we use master/research/object_detection/exporter_main_v2.py, which outputs a SavedModel directory and some other files, but not a frozen inference graph. This is because frozen graphs have been deprecated in TF 2.x.
Does anyone know how I can work around this, or whether there is another way to export an object detection SavedModel to a single frozen inference graph?
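One common workaround is `convert_variables_to_constants_v2` from `tensorflow.python.framework.convert_to_constants`, which inlines a ConcreteFunction's variables as constants and gives you a GraphDef you can write to disk. A minimal sketch; the tiny `Demo` module and the `/tmp` paths are stand-ins, not the real exported detection model:

```python
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

# Tiny stand-in SavedModel so the snippet is self-contained; with the OD API
# you would instead load the saved_model directory written by exporter_main_v2.
class Demo(tf.Module):
    def __init__(self):
        self.w = tf.Variable(tf.random.normal([3, 2]))

    @tf.function(input_signature=[tf.TensorSpec([None, 3], tf.float32)])
    def __call__(self, x):
        return tf.matmul(x, self.w)

module = Demo()
tf.saved_model.save(module, "/tmp/demo_saved_model", signatures=module.__call__)

# Load the SavedModel and take its serving signature (a ConcreteFunction)
loaded = tf.saved_model.load("/tmp/demo_saved_model")
concrete_func = loaded.signatures["serving_default"]

# Inline every variable as a constant -> a single frozen GraphDef,
# the closest TF2 analogue to TF1's frozen_inference_graph.pb
frozen_func = convert_variables_to_constants_v2(concrete_func)
graph_def = frozen_func.graph.as_graph_def()

tf.io.write_graph(graph_def, "/tmp", "frozen_inference_graph.pb", as_text=False)
```

Whether the TF1 inference scripts accept the resulting graph depends on the node names and input/output tensors they expect, so treat this as a starting point.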
I previously used TensorFlow 1 with export_inference_graph.py and performed inference on the frozen graphs. Now I'm attempting to migrate my scripts to TensorFlow 2, but the inference scripts are still TensorFlow 1 for now, so I wanted a way to train models in TensorFlow 2 and still be able to perform inference using the TensorFlow 1 scripts.
I’ll look into it. In general, I was just wondering whether freezing the graph is possible at all. It seems like the Medium articles you linked above should work, but I encountered the “_UserObject has no attribute ‘inputs’” error when trying to run the scripts.
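For what it's worth, that error typically appears when a script treats the object returned by `tf.saved_model.load` as a Keras model: the loader returns a `_UserObject` that has no `.inputs`/`.outputs` attributes. The serving signature exposes the input/output specs instead. A sketch with a stand-in model (the `Demo` module and path are hypothetical):

```python
import tensorflow as tf

# Stand-in SavedModel; substitute the directory written by exporter_main_v2
class Demo(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([None, 3], tf.float32)])
    def __call__(self, x):
        return 2.0 * x

module = Demo()
tf.saved_model.save(module, "/tmp/demo_saved_model", signatures=module.__call__)

loaded = tf.saved_model.load("/tmp/demo_saved_model")
# loaded is a _UserObject, not a Keras model, so loaded.inputs raises
# AttributeError -- the error quoted above.
print(hasattr(loaded, "inputs"))  # expected False for a plain restored object

# Inspect inputs and outputs through the serving signature instead:
sig = loaded.signatures["serving_default"]
print(sig.structured_input_signature)
print(sig.structured_outputs)
```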
If it doesn’t require too much rewriting of the inference code in the last Colab, you could try working directly from the SavedModel, which is more natural with TF2.
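Working directly from the SavedModel in TF2 is essentially load-and-call, with no sessions or frozen graphs involved. A sketch under the same stand-in assumptions (the `Demo` module and path are placeholders for the real exported model):

```python
import tensorflow as tf

# Stand-in for the exported detection model (hypothetical path below)
class Demo(tf.Module):
    def __init__(self):
        self.w = tf.Variable(tf.ones([3, 2]))

    @tf.function(input_signature=[tf.TensorSpec([None, 3], tf.float32)])
    def __call__(self, x):
        return tf.matmul(x, self.w)

tf.saved_model.save(Demo(), "/tmp/demo_saved_model")

# TF2-style inference: load the SavedModel and call it directly
detect_fn = tf.saved_model.load("/tmp/demo_saved_model")
out = detect_fn(tf.constant([[1.0, 2.0, 3.0]]))
```

With the OD API the call returns a dictionary of detection tensors rather than a single tensor, but the load-and-call pattern is the same.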