Hey all!
Not sure if this is the right place to post.
I think the current version of Vertex AI/AutoML Vision for Object Detection has a bug in its TensorFlow Lite export. This is something new; it worked perfectly fine a month ago.
Specifically, the exported TensorFlow Lite object detector’s confidence scores are capped at 0.5. It’s not the model itself, since the TensorFlow.js export of the same model produces bounding-box scores with no 0.5 maximum.
I think it’s just a bug in AutoML/Vertex AI that renders the TFLite Object Detection model completely unusable.
I reported the issue here, but just wanted to give it more visibility since it’s a complete roadblock for anyone using Object Detection with Vertex AI/AutoML.
Let me know if there’s anything I can do to help! Happy to test any fixes and verify they work.
What’s odd is that one environment works, while the other one I’m running gets this RuntimeError:
Failed to prepare for TPU. Failed precondition: Package requires runtime version (14) which is newer than this runtime version (13). Node number 2 (EdgeTpuDelegateforCustomOp) failed to prepare
I saw the same error with TFLite models on a Raspberry Pi… any ideas how to update the Edge TPU runtime from v13 to v14?
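For anyone else hitting this: the two numbers refer to the Edge TPU runtime (libedgetpu) installed on the device versus the runtime version the compiled model expects, so the fix is updating the runtime on the device. Here is a quick sketch for checking what’s installed first (I’m assuming the pycoral package and its get_runtime_version() helper here; double-check the Coral docs for your release):
# Sketch: print the installed Edge TPU runtime version.
# Assumption: pycoral is installed and exposes get_runtime_version();
# verify the exact helper name against the Coral docs for your version.
from pycoral.utils.edgetpu import get_runtime_version

print(get_runtime_version())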
Interesting update:
It seems the score cap differs between the model options in AutoML:
0.6 max with the Low Latency option
0.5 max with the Best Tradeoff option
Both models work well in TensorFlow.js format, with no cap on scores, so it’s definitely specific to the TFLite model conversion in AutoML Vision.
Additionally, I ran the export again today and still hit the same issue, so it has not been resolved yet.
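For anyone who wants to reproduce the measurement, here’s roughly how I’m checking the cap. It’s a sketch: model.tflite and the test_images folder are placeholders, and I’m assuming the standard TFLite detection output order where index 2 holds the per-detection scores (check output_details on your own export).
import glob
import cv2
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")  # placeholder path
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

max_score = 0.0
for path in glob.glob("test_images/*.jpeg"):  # placeholder folder
    img = cv2.resize(cv2.imread(path), (320, 320))
    # The AutoML TFLite export takes uint8 images, which is what cv2 returns.
    interpreter.set_tensor(input_details[0]["index"], np.expand_dims(img, axis=0))
    interpreter.invoke()
    # Assumption: output index 2 is the scores tensor.
    scores = interpreter.get_tensor(output_details[2]["index"])
    max_score = max(max_score, float(scores.max()))

print("Highest confidence across all images:", max_score)  # stalls at the cap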
@Alex_Seguin
Can you try running it on a computer in Python?
I believe it is a problem with the model itself, not the runtime environment that it is running on. Would be very interested to learn otherwise.
Let me know!
Here is a handy script to test a .tflite model on any computer running Python:
import cv2
import numpy as np
import tensorflow as tf

# Load a test image and resize it to the model's 320x320 input.
img = cv2.imread("test.jpeg")
new_img = cv2.resize(img, (320, 320))

# Load the TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

# Get input and output tensor details.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Run the model on the test image (with a batch dimension added).
interpreter.set_tensor(input_details[0]['index'], np.expand_dims(new_img, axis=0))
interpreter.invoke()

# Show the confidence scores.
scores = interpreter.get_tensor(output_details[2]['index'])
print(scores)
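If you want the full detections, not just the scores, the usual TFLite detection output order is boxes, classes, scores, count, though the indices can vary by export, so treat these as assumptions and check output_details on your own model. Appending this to the script above:
# Assumed output layout (verify via output_details on your model):
#   0: bounding boxes [1, N, 4]   1: class ids [1, N]
#   2: confidence scores [1, N]   3: number of detections [1]
boxes = interpreter.get_tensor(output_details[0]['index'])[0]
classes = interpreter.get_tensor(output_details[1]['index'])[0]
num = int(interpreter.get_tensor(output_details[3]['index'])[0])

for i in range(num):
    if scores[0][i] > 0.3:  # arbitrary demo threshold
        print(f"class={int(classes[i])} score={scores[0][i]:.3f} box={boxes[i]}")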