I am starting with the topic of object detection, and I am following a tutorial (Installation — TensorFlow 2 Object Detection API tutorial documentation). I have followed it step by step, but no matter how hard I try, I always run into problems with incompatible library versions. Since the tutorial is from a couple of years ago, has it become impractical? Or is there some way to make it work? I have already done the Model Garden example with ResNet-50, but I want to broaden my knowledge with multiple models. How should I proceed? Thank you very much to all.
PS. I have tried it from Google Colab and from virtual environments, trying to install the specific library versions, but some libraries update other libraries, and I think it becomes impractical. Or am I going about it the wrong way?
Hi @David_Vahos
FWIW
Does your setup match the requirements?
If so, I would report a new issue on the project's GitHub.
And remember to share/post the error messages you get; that is always helpful.
Thanks.
The three tutorials you mentioned are very useful. They show fine-tuning with the Model Garden training experiment framework, which can display the training metrics for both the training and validation sets.
What if, after training, I want to evaluate the model using a third split of the dataset (the test set)? I just need to get the same type of metrics (AP) displayed during training. Can I do this using the Model Garden?
Thanks for your reply @Japheth_Mumo. The video you linked is about the TensorFlow Object Detection API, but I am actually using the TF-Vision Model Garden.
According to the README, the TensorFlow Object Detection API is deprecated.
@Siva_Sravana_Kumar_N, I found a workaround: after training, I run the experiment a second time. This time I use the test set in place of the validation set and set the experiment mode to 'eval' instead of 'train_and_eval'. For model_dir I use a copy of the original model_dir directory, so as not to mix the actual validation logs with the test logs.
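Concretely, the second run looks roughly like this in my case, assuming the experiment is driven from Python via `tfm.core.train_lib` as in the tutorials, and that `exp_config`, `model_dir`, and `distribution_strategy` are still around from the training run. The test TFRecord path and the copied directory name below are placeholders:

```python
import shutil
import tensorflow_models as tfm

# Copy the trained model_dir so the test-set logs don't mix with the validation logs.
eval_model_dir = './trained_model_test_eval'   # hypothetical path
shutil.copytree(model_dir, eval_model_dir)

# Point the validation input at the test-set TFRecords (path is a placeholder).
exp_config.task.validation_data.input_path = 'path/to/test-*.tfrecord'

# Re-run the experiment in eval-only mode: it loads the latest checkpoint from
# eval_model_dir and reports the same COCO AP metrics, now on the test split.
task = tfm.core.task_factory.get_task(exp_config.task, logging_dir=eval_model_dir)
model, eval_logs = tfm.core.train_lib.run_experiment(
    distribution_strategy=distribution_strategy,
    task=task,
    mode='eval',
    params=exp_config,
    model_dir=eval_model_dir,
    run_post_eval=True)

print(eval_logs)
```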
Hi, thanks for sharing the workaround for evaluating on the test set, it is very useful.
The training and validation went well for me, but I also want to test the pretrained performance (before I train), so I export the model using the snippet from the tutorial:
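For reference, the export call I use looks roughly like this; `HEIGHT`, `WIDTH`, and the export directory are placeholders, and they have to match my experiment config:

```python
import tensorflow as tf
from official.vision.serving import export_saved_model_lib

# Export the latest checkpoint as a SavedModel that takes a batched uint8 image tensor.
export_saved_model_lib.export_inference_graph(
    input_type='image_tensor',
    batch_size=1,
    input_image_size=[HEIGHT, WIDTH],   # placeholders; must match exp_config
    params=exp_config,
    checkpoint_path=tf.train.latest_checkpoint(model_dir),
    export_dir='./exported_model')      # hypothetical output directory
```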
But after getting the saved_model.pb and loading it up for inference, it returned blank outputs (as in the picture in this issue comment). I don't know what to look at next, since I just pull the config and checkpoint from the source.
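For completeness, this is roughly how I load the exported model and run inference (the image path and input size are placeholders); the blank output shows up when I print the raw detection results:

```python
import tensorflow as tf

# Load the exported SavedModel and grab its default serving signature.
imported = tf.saved_model.load('./exported_model')
detect_fn = imported.signatures['serving_default']

# The exported graph expects a batched uint8 image tensor of the export size.
image = tf.io.decode_image(tf.io.read_file('test.jpg'), channels=3)  # placeholder image
image = tf.image.resize(image, [HEIGHT, WIDTH])                      # same size used at export
image = tf.cast(image, tf.uint8)[tf.newaxis, ...]

result = detect_fn(image)
# All-zero scores / empty boxes here mean the model is detecting nothing.
print(result['detection_scores'][0][:10])
print(result['detection_boxes'][0][:10])
```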