Object detection Android app throws an error with a model from tflite-model-maker (it had worked for many weeks until a few weeks ago)

Hi there,

I am creating a custom Android app for object detection. For this, I use the TensorFlow Object Detection Android app from here: examples/lite/examples/object_detection/android at master · tensorflow/examples · GitHub
I am training my models with TFLite Model Maker using the following code:


!pip install -q tflite-model-maker
!pip install -q pycocotools

#----------------Python code--------------------------
import numpy as np
import os

from tflite_model_maker.config import ExportFormat
from tflite_model_maker import model_spec
from tflite_model_maker import object_detector

import tensorflow as tf
assert tf.__version__.startswith('2')

tf.get_logger().setLevel('ERROR')
from absl import logging
logging.set_verbosity(logging.ERROR)

spec = model_spec.get('efficientdet_lite0')

test_data = object_detector.DataLoader.from_pascal_voc('./test', './test', label_map={1: 'Ball', 2: 'Spieler Rot', 3: 'Spieler Gelb'})
train_data = object_detector.DataLoader.from_pascal_voc('./train', './train', label_map={1: 'Ball', 2: 'Spieler Rot', 3: 'Spieler Gelb'})
validation_data = object_detector.DataLoader.from_pascal_voc('./valid', './valid', label_map={1: 'Ball', 2: 'Spieler Rot', 3: 'Spieler Gelb'})
model = object_detector.create(train_data, model_spec=spec, batch_size=16, train_whole_model=True, validation_data=validation_data, epochs=1)
model.evaluate(test_data)
model.export(export_dir='.')


This worked without any errors for many weeks. Now I get the following error in Android Studio:
Output tensor at index 0 is expected to have 3 dimensions, found 2.

My dataset is exactly the same and I train on Google Colab. I am sure that I didn't change anything in the Android app.
I look forward to your answers :slight_smile:

Have a nice day.

Greetings,
Daniel Hauser

Hi @Daniel_Hauser

In which build variant are you getting the error, lib_task or lib_interpreter?
Upload the .tflite file somewhere and give us a link so we can verify the output shape.

Thanks

This might be related to InvalidArgumentError: required broadcastable shapes [Op:Mul]. Are you training your model inside Google Colab? If so, have you pinned the TensorFlow version, or are you using the default version Colab provides?

Hi George_Soloupis,
thank you.
I use lib_interpreter. How can I change to lib_task_api?
I uploaded my tflite model to drive:

I used Netron, and the output tensors had a different name than in another model that worked fine.
This model worked without any error:

It was trained about six weeks ago

Recently the Colab TensorFlow version changed from 2.5.0 to 2.6.0.
Check if you can get what you want with the previous version. I will get back to you with info on the .tflite files.

Okay thank you. This could be the issue.
I tried the following:
!pip install --ignore-installed --upgrade tensorflow==2.5.0
But I got problems with software dependencies. I will spend more time later getting it to work with version 2.5.0.
Do you know how to switch to lib_task_api instead of lib_interpreter? :slight_smile:

Oh yes. I forgot that.
Go to the bottom left of Android Studio and change the build variant to lib_task_api.

I viewed the two files in Netron. They look exactly the same, like this:

BUT when you print the details of the output tensors you have:
Your file:

[{'name': 'StatefulPartitionedCall:3;StatefulPartitionedCall:2;StatefulPartitionedCall:1;StatefulPartitionedCall:02', 'index': 600, 'shape': array([ 1, 25], dtype=int32), 'shape_signature': array([ 1, 25], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}},
 {'name': 'StatefulPartitionedCall:3;StatefulPartitionedCall:2;StatefulPartitionedCall:1;StatefulPartitionedCall:0', 'index': 598, 'shape': array([ 1, 25, 4], dtype=int32), 'shape_signature': array([ 1, 25, 4], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}},
 {'name': 'StatefulPartitionedCall:3;StatefulPartitionedCall:2;StatefulPartitionedCall:1;StatefulPartitionedCall:03', 'index': 601, 'shape': array([1], dtype=int32), 'shape_signature': array([1], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}},
 {'name': 'StatefulPartitionedCall:3;StatefulPartitionedCall:2;StatefulPartitionedCall:1;StatefulPartitionedCall:01', 'index': 599, 'shape': array([ 1, 25], dtype=int32), 'shape_signature': array([ 1, 25], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}]

and old working file:

[{'name': 'StatefulPartitionedCall:31', 'index': 598, 'shape': array([ 1, 25, 4], dtype=int32), 'shape_signature': array([ 1, 25, 4], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}},
 {'name': 'StatefulPartitionedCall:32', 'index': 599, 'shape': array([ 1, 25], dtype=int32), 'shape_signature': array([ 1, 25], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}},
 {'name': 'StatefulPartitionedCall:33', 'index': 600, 'shape': array([ 1, 25], dtype=int32), 'shape_signature': array([ 1, 25], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}},
 {'name': 'StatefulPartitionedCall:34', 'index': 601, 'shape': array([1], dtype=int32), 'shape_signature': array([1], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}]
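For reference, dumps like the two above can be produced with the TFLite Python API. A minimal sketch (the helper name is mine, and the model path would be whatever file you want to inspect):

```python
# Minimal sketch: list the name and shape of every output tensor in a
# .tflite file, which is enough to compare the two models above.
import tensorflow as tf

def output_shapes(model_path):
    """Return (name, shape) pairs for each output tensor of the model."""
    interpreter = tf.lite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    return [(d["name"], list(d["shape"]))
            for d in interpreter.get_output_details()]
```

Calling `output_shapes("model.tflite")` on each file should reproduce the shape differences shown in the dumps.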

I think the order of the output arrays has been changed. You can go here:
https://github.com/tensorflow/examples/blob/master/lite/examples/object_detection/android/lib_interpreter/src/main/java/org/tensorflow/lite/examples/detection/tflite/TFLiteObjectDetectionAPIModel.java#L203-L206
and change the order to:

outputMap.put(0, outputScores);
outputMap.put(1, outputLocations);
outputMap.put(2, numDetections);
outputMap.put(3, outputClasses);

and I think your project will work again!

I do not know why this change happened, but I feel we should tag @khanhlvg and @Yuqi_Li to shed some light, or just to inform them.

If you need more help tag me.

Thank you so much!! :slight_smile: It finally worked with TensorFlow 2.5.0 and PyYAML 5.1.
We changed the lines inside lib_interpreter, but that alone did not work. I believe more changes would have to be made inside the Android app.
I wish you all the best.

Greetings,
Daniel Hauser


We are aware of a breaking change in TF 2.6 regarding the model signature def, which resulted in a change in the output tensor order of object detection models created by Model Maker. That is the root cause of the issue you raised in the first comment. We're actively working on fixing it. For the time being, please stick to TF 2.5 when training and running Model Maker for object detection.
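As a stopgap while the order is in flux, app code could look outputs up by shape instead of by fixed index. This is my own suggestion, not part of the official fix; `output_details` stands for the list that `tf.lite.Interpreter.get_output_details()` returns:

```python
# Workaround sketch (my own, not from the thread's official fix): identify
# detection outputs by their shapes so the TF 2.5 vs 2.6 reordering does
# not matter. N is the number of detections (25 in the dumps above).
# Scores and classes both have shape [1, N], so they still have to be told
# apart by name or value range and are not mapped here.
def classify_outputs(output_details):
    """Map role names to output indices based on tensor shapes."""
    roles = {}
    for i, d in enumerate(output_details):
        shape = list(d["shape"])
        if len(shape) == 3 and shape[-1] == 4:
            roles["boxes"] = i   # [1, N, 4] bounding boxes
        elif shape == [1]:
            roles["count"] = i   # [1] number of detections
    return roles
```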


Hey, I am using the salad object detection colab (Google Colab) and am running into this exact same error. I followed the instructions to downgrade TensorFlow to 2.5 with PyYAML 5.1 and am still getting the error. What am I missing?

Hi @Winton_Cape

Are you getting the error "Output tensor at index 0 is expected to have 3 dimensions, found 2"?
If so, try to change the order of the outputs in the Android app. Check my answer above.

Best


In case anyone having issues with tflite-model-maker in Colab recently stumbles upon this thread: you might want to look at @Winton_Cape's issue if you have the same one.

Yes, I am getting that error. I did see your answer, but I am using the salad detector demo. When I import the project into Android Studio, there is no file called TFLiteObjectDetectionAPIModel.java, so I don't know how to change the order of the outputs in this project. This is what worked for me with the salad detection demo:

  1. Changed the notebook to CPU by editing the notebook settings

  2. Changed the TensorFlow version and Model Maker:
    !pip install -q tensorflow==2.5.0
    !pip install -q --use-deprecated=legacy-resolver tflite-model-maker
    !pip install -q pycocotools

  3. Ran all the other cells as-is and it worked. I was able to generate a model file (model.tflite), evaluate it, and test its performance on a URL image.
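A small guard cell could make the pinned-version assumption explicit. This is just a sketch of the check, with the version strings taken from this thread (TensorFlow 2.5.x, PyYAML 5.1):

```python
# Sketch: fail fast if the Colab runtime drifted from the version
# combination reported to work in this thread.
def versions_ok(tf_version, yaml_version):
    """True when the runtime matches the known-good pins."""
    return tf_version.startswith("2.5.") and yaml_version.startswith("5.1")
```

In the notebook it could be called with `tf.__version__` and `yaml.__version__` right after the installs.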


I tried this and the app runs for me with lib_interpreter.
Is this the only difference between 2.6 and 2.5, though?
If not, the detection results might not be accurate.

Update: as stated above, the order of the outputs in the last two optional steps of the tutorial is wrong. Here's what I did:

  1. Changed the order of the last two outputs of the model. This is in the detect_objects function, where the assignment of count and scores is reversed. I changed it to:
    count = int(get_output_tensor(interpreter, 2))
    scores = get_output_tensor(interpreter, 3)
    *** these two are reversed in the tutorial ***

  2. Where the results are assigned, just after the tensor assignment above, the bounding_box and class_id assignments should also be swapped:
    result = { 'class_id': boxes[i],
               'bounding_box': classes[i] }
This was the quick fix I did to get it to work. I am sure the code could be optimized.

Hi @Winton_Cape

Is there any way you can explain this process a little more, so I could follow along with my Colab/models?

The TF upgrade to 2.6 seems to be causing a lot of issues with my output orders, and it keeps crashing my Android app. The only models I can get to work in it were created before the upgrade to 2.6.

Any help is greatly appreciated.

Cheers,
Will

Hi Khanh,

Any update on a fix for the change in output tensor order for Model Maker models? Or even just a sneaky workaround; my app works great with models built on 2.5 and I'm trying to avoid rebuilding the app.

Any tips on how to continue building models on 2.5 without running into compatibility errors with dependent packages, etc.?

(trying my best to stick with the app from the salad detection colab)

@wwfisher The issue was fixed in the latest version of Model Maker (0.3.4). You can use it with the latest version of TensorFlow. Please note that the output TFLite models only work correctly with Task Library version 0.3.1.

You can see the object detection tutorial for details.


Hi Will,

Yes, I have faced the same issues. I gave up on using that colab because the output tensor order of the model it creates is different from the model used in the sample app. So I attacked the problem from a different angle. Here's what I did.

I still used the same object detection phone app example but used a different strategy to create the custom model. I used the Google Cloud Vision API to create the custom object detection model. I ended up relabeling all of my images because I couldn’t figure out a way to convert my label data into the Google CSV format, but other than that their process worked smoothly. The link even uses the same salad example.

Once the Vision API has generated the model, you can download the model from the cloud. That model will have the correct output tensor order and will work with the object detection phone app example. If you need more help let me know.

***** If you can figure out a way to easily transfer label data (VOC), let me know.
***** Looks like they solved the issue.
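On the open question about transferring VOC label data: a rough converter sketch. The CSV column layout used here (set, GCS URI, label, normalized min corner, two blanks, normalized max corner, two blanks) is my best understanding of the Cloud Vision import format and should be verified against the official docs; all paths are placeholders.

```python
# Hedged sketch: turn one Pascal VOC annotation into Cloud Vision CSV rows.
# Coordinates are normalized to [0, 1]; the CSV column layout is an
# assumption and should be checked against Google's documentation.
import xml.etree.ElementTree as ET

def voc_to_automl_rows(xml_text, gcs_uri, subset="TRAIN"):
    """Yield one CSV row (as a list of strings) per <object> in the VOC XML."""
    root = ET.fromstring(xml_text)
    width = float(root.findtext("size/width"))
    height = float(root.findtext("size/height"))
    for obj in root.iter("object"):
        box = obj.find("bndbox")
        x_min = float(box.findtext("xmin")) / width
        y_min = float(box.findtext("ymin")) / height
        x_max = float(box.findtext("xmax")) / width
        y_max = float(box.findtext("ymax")) / height
        yield [subset, gcs_uri, obj.findtext("name"),
               f"{x_min:.4f}", f"{y_min:.4f}", "", "",
               f"{x_max:.4f}", f"{y_max:.4f}", "", ""]
```

The rows can then be written out with the standard `csv` module and uploaded alongside the images in the GCS bucket.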