I have been working on an image classification problem wherein the objective is to train a predefined neural network model with a set of TFRecords and then run inference. This all works with reasonable accuracy in Colab.
Subsequent to this I converted the saved_model.pb into a model.tflite file. I have checked it with the Netron app, and it seems to take the correct input (an image tensor).
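As a cross-check of what Netron shows, the input details can also be read programmatically (a minimal sketch; the model path is a placeholder):

```python
import tensorflow as tf

# Load the converted model and inspect its input tensor,
# which should match what Netron shows (e.g. [1, 320, 320, 3] float32)
interpreter = tf.lite.Interpreter(model_path="model.tflite")  # placeholder path
interpreter.allocate_tensors()
print(interpreter.get_input_details())
```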
After this I called interpreter.invoke().
Following this, when I try to decipher the output tensor, I should be able to at least render the output image, but I am having difficulty doing this.
Here is the link to the Colab notebook where I have maintained the code: Google Colab
I have other Colab notebooks where similar code was run with training for up to 7500 iterations, but I am stuck at the interpreter level in every case, since I have to port this app to the Android platform.
I see at the end of your Colab notebook that you try to print the output of the model, which is an array of shape [1, 10, 4]. Why are you doing that if, as you mentioned, this is a classification problem? Or is this an object detection problem?
From the previous cells I see that with the model you do something like the following:
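Something along these lines (a sketch of the usual TF2 Object Detection API inference cell; detect_fn and the paths are illustrative, not necessarily your exact code):

```python
import numpy as np
import tensorflow as tf
from PIL import Image

# Load the exported detection model and run it on one image
detect_fn = tf.saved_model.load("exported_model/saved_model")  # illustrative path
image_np = np.array(Image.open("test.jpg"))                    # illustrative path
input_tensor = tf.convert_to_tensor(image_np)[tf.newaxis, ...]

detections = detect_fn(input_tensor)
# detections holds 'detection_boxes', 'detection_classes',
# 'detection_scores' and 'num_detections'
```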
So you use the model output, post-process it, and then visualize the boxes on the image.
You can think of the output of the Interpreter as the output of the saved model. So you have to follow the same procedure there as well, e.g. preprocess the image, make the predictions with the Interpreter, post-process, and draw the boxes on the image.
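The same pipeline with the TFLite Interpreter would look roughly like this (a minimal sketch; the paths, input size and normalization are assumptions you should verify against your own model):

```python
import numpy as np
import tensorflow as tf
from PIL import Image

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# 1. Preprocess: resize to the model's expected input size, add a batch dim
_, height, width, _ = input_details[0]["shape"]
image = Image.open("test.jpg").resize((width, height))  # placeholder path
input_data = np.expand_dims(np.array(image, dtype=np.float32), axis=0)
input_data = (input_data - 127.5) / 127.5  # common SSD normalization; verify for your model

# 2. Predict
interpreter.set_tensor(input_details[0]["index"], input_data)
interpreter.invoke()

# 3. Post-process: the outputs are boxes/classes/scores, not an image
boxes = interpreter.get_tensor(output_details[0]["index"])  # e.g. shape [1, 10, 4]
# the order of output tensors can differ, so check output_details first;
# then filter by score and draw the surviving boxes on the original image
```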
If this is indeed an object detection project, check out a good example here:
Hi @George_Soloupis The objective of the above Colab notebook is to do face recognition by retraining existing models (in this case I took ssd_mobilenet_v2_320x320_coco17_tpu-8 after testing models like SSD MobileNet v2 320x320 and EfficientDet D0 512x512).
I have created a notebook where I am doing correct inference with the images in Colab.
Now my effort was to create a model.tflite file, which initially I was unable to do. After further digging I found that the mechanism for creating the .tflite file is to run a couple of commands.
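Roughly, the two steps look like this in Colab (a sketch with placeholder paths for the pipeline config and checkpoint directory, not necessarily my exact cells):

```python
# Step 1: export a TFLite-compatible SavedModel from the training checkpoint
!python models/research/object_detection/export_tflite_graph_tf2.py \
    --pipeline_config_path=pipeline.config \
    --trained_checkpoint_dir=checkpoint/ \
    --output_directory=tflite_export/

# Step 2: convert that SavedModel to model.tflite
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("tflite_export/saved_model")
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```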
I implemented these commands in the Colab notebook I shared earlier, not in the one where I am doing the inference, as you can see in that notebook.
I was trying to check whether the interpreter call returns an image as output, as I had seen in the Selfie2Anime Colab notebook maintained by @Sayak_Paul.
This is why I was calling plt.imshow(output()[0]).
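For context, printing the output details shows what that tensor actually holds (a sketch, assuming the interpreter object from my notebook):

```python
# The first output tensor has shape [1, 10, 4]: normalized box coordinates,
# not pixel data, which is why plt.imshow() cannot render it as a picture
for detail in interpreter.get_output_details():
    print(detail["name"], detail["shape"], detail["dtype"])
```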
So how do I go ahead now…
I am trying to understand the notebook from the link you have shared.
I am really lost here. I have not understood the procedure. If you could upload the correct Colab notebook for me to take a look, that would be fine! The previous one just shows an object detection procedure.
Here I have correctly inferred 2 people; the model was trained only for these 2 people.
In this notebook I failed to create the tflite model, which I did later in the earlier notebook.
@George_Soloupis I keep getting a prompt that the link cannot be shared in the post, so I have shared it via the email notification I received in my Gmail account.
@George_Soloupis I am going through the same.
Initially I thought that just by calling interpreter.invoke() and catching the output tensor, I could display the image back.
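If I understand the advice above correctly, those outputs have to be drawn onto the input image instead. A minimal sketch, continuing the Interpreter example from earlier and assuming the usual boxes/classes/scores output order (which may differ per model):

```python
import matplotlib.pyplot as plt
import matplotlib.patches as patches

boxes = interpreter.get_tensor(output_details[0]["index"])[0]   # [10, 4], normalized coords
scores = interpreter.get_tensor(output_details[2]["index"])[0]  # check the real order first

fig, ax = plt.subplots()
ax.imshow(image)  # the original input image, not the model output
w, h = image.size
for box, score in zip(boxes, scores):
    if score < 0.5:  # arbitrary confidence threshold
        continue
    ymin, xmin, ymax, xmax = box
    ax.add_patch(patches.Rectangle((xmin * w, ymin * h),
                                   (xmax - xmin) * w, (ymax - ymin) * h,
                                   fill=False, edgecolor="red", linewidth=2))
plt.show()
```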
I will try to understand it and see if I can implement it with the links you have shared.