Hello, I am trying to create a chess piece image classification app running on my phone. I trained the model using transfer learning, made 6 classes for the six piece types, and converted my TF model to a TFLite model. When I use the model together with a label map in the Android project, I get an error stating: Caused by: java.lang.IllegalArgumentException: Label number 6 mismatch the shape on axis 1
The whole error log is here: https://pastebin.com/bxdq9x1r
This is my tflite model: https://www.pastefile.com/vpg57x
This is my label map: https://www.pastefile.com/9rg9v7
The label map URL seems unreachable. Can you check it?
This is my label map: https://www.pastefile.com/ncfyht
The previous link was giving an error, so I pasted it again.
If needed, I can share the code for training the model and converting it to TFLite, as well as the Java code for the Android app.
I am using this command for converting the TF model to a TFLite model:

```python
command = "tflite_convert \
  --saved_model_dir={} \
  --output_file={} \
  --input_shapes=1,300,300,3 \
  --input_arrays=normalized_input_image_tensor \
  --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
  --inference_type=FLOAT \
  --allow_custom_ops".format(FROZEN_TFLITE_PATH, TFLITE_MODEL)
```
Looking at --input_shapes, I think this is the image shape the model expects, so maybe when I take a picture with my phone it has a different shape, for example 2000x3000 px, and that is when the error occurs. Could that be the cause?
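If that is the case, one common fix is to scale the captured Bitmap down to the model's input size before filling the input buffer. A minimal sketch, assuming the 300x300 shape from the --input_shapes flag above (the class and method names are just placeholders):

```java
import android.graphics.Bitmap;

// Hypothetical helper: scale whatever the camera returns (e.g. 2000x3000)
// down to the size the model was converted with.
final class InputScaler {
    // Assumption: matches --input_shapes=1,300,300,3 from the conversion command above.
    static final int INPUT_SIZE = 300;

    static Bitmap toModelInput(Bitmap cameraFrame) {
        // filter=true uses bilinear filtering, usually what you want for photos
        return Bitmap.createScaledBitmap(cameraFrame, INPUT_SIZE, INPUT_SIZE, true);
    }
}
```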
In the Interpreter API there is a utility to resize your input (Interpreter.resizeInput). Check it out.
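For reference, a minimal sketch of that resize utility on the Java side, assuming input tensor index 0 and a 640x640x3 shape (check the real shape in Netron); whether a converted model accepts a resized input depends on the model itself:

```java
import org.tensorflow.lite.Interpreter;
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;

// modelBuffer, imgData and outputArray are assumed to come from the existing wrapper class.
final class ResizeExample {
    static void runWithResizedInput(MappedByteBuffer modelBuffer, ByteBuffer imgData, float[][] outputArray) {
        Interpreter tflite = new Interpreter(modelBuffer);
        tflite.resizeInput(0, new int[]{1, 640, 640, 3}); // input tensor 0 -> the shape actually being fed
        tflite.allocateTensors();                         // re-allocate tensor buffers for the new shape
        tflite.run(imgData, outputArray);
        tflite.close();
    }
}
```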
Hello again. I checked it, and the error was indeed coming from the size of the input image, so I changed the size to match the input size of the model, in my case 640x640. But now I am facing two new errors. The first one is: Cannot copy from a TensorFlowLite tensor with shape [1, 10, 4] to a Java object with shape [1, 6]
, which I understand to mean that my shape (6 chess pieces) does not match what the Java code expects (10, 4). So I found this line in the class: imgData = ByteBuffer.allocateDirect(4 * DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y * DIM_PIXEL_SIZE);
and deleted the 4, so that error disappears, but a new one appears: java.nio.BufferOverflowException
I am attaching my interpreter class now; please help me, I am really in a pickle: https://pastebin.com/Hti2zeJm
The exception is thrown at line 189. Thanks in advance.
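A note on the two errors: the 4 in ByteBuffer.allocateDirect(...) is the size of a float32 in bytes, not a tensor dimension, so removing it makes the buffer four times too small and putFloat() overflows it. A minimal sketch of the allocation, with the sizes as assumptions; the [1, 10, 4] vs. [1, 6] mismatch is a separate issue on the output side, discussed below:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

final class InputBufferExample {
    // Sizes are assumptions for illustration -- use the values your model actually expects.
    static final int DIM_IMG_SIZE_X = 640;
    static final int DIM_IMG_SIZE_Y = 640;
    static final int DIM_PIXEL_SIZE = 3;   // RGB channels
    static final int BYTES_PER_FLOAT = 4;  // the "4" in the original line: bytes in a float32, not a tensor dim

    static ByteBuffer allocateInput() {
        ByteBuffer imgData = ByteBuffer.allocateDirect(
                BYTES_PER_FLOAT * DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y * DIM_PIXEL_SIZE);
        imgData.order(ByteOrder.nativeOrder());
        // Every imgData.putFloat(pixel) writes 4 bytes, so dropping the factor of 4
        // leaves the buffer 4x too small and putFloat() eventually throws BufferOverflowException.
        return imgData;
    }
}
```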
Hi Ivan,
I do not think your assumption is correct. Here is an error that I forced to appear in my Android Studio project: E/EXECUTOR: something went wrong: Cannot copy to a TensorFlowLite tensor (input) with 1769472 bytes from a Java Buffer with 2211840 bytes.
So my model expects an input of 1769472 bytes and gets 2211840. In your case the shapes do not match either: the tensor is [1, 10, 4], but your Java object is [1, 6].
Please open your .tflite file in netron.app to get a visualization of it. You are always welcome to give us a link to your project so we can build it and help you.
Best
Hello, thank you very much for the answer! I am adding my GitHub repo link so you can download the project and look at the error yourself. My GitHub link: https://github.com/ivanpetrov95/chesshelper/tree/master
Now for the error: I simulated a similar error by replacing my 640 px size with something bigger or smaller, but again, I am a newbie in the TensorFlow area, so I might be wrong. My model is in the assets folder, along with the label map and everything else needed. I checked my .tflite file in Netron and saw that the required input is 640x640x3 (the 3 is the RGB channels, I think, but again, I am a newbie), but I could not “read” the output.
Thank you again for looking at my problem and answering me!
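A quick way to “read” the inputs and outputs of a .tflite file, complementing Netron, is to load it with the Python Interpreter and print the tensor details; the file name below is a placeholder:

```python
import tensorflow as tf

# Load the converted model and inspect its tensors (path is a placeholder).
interpreter = tf.lite.Interpreter(model_path="detect.tflite")
interpreter.allocate_tensors()

for d in interpreter.get_input_details():
    print("input :", d["name"], d["shape"], d["dtype"])
for d in interpreter.get_output_details():
    print("output:", d["name"], d["shape"], d["dtype"])
```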
Hi Ivan,
I will look into your project and get back to you when I figure out what is going on with the inputs to the Interpreter. But you definitely have to check the conversion of your model to TFLite, because I have also seen that netron.app does not show results for the output of the model. Something is going on there!
Best
Hi Ivan,
I built your project and took a look at it. The result is that your model outputs an array of [1][10][4], while you are giving it an output array of [1][6]. So the problem is with the output of the model. I think you have to check the conversion to TFLite again (assuming the pure TensorFlow code works correctly) and inspect it with netron.app. I really do not know what the output should be, but if it has to be an array of [1][6], you will see that in netron.app after a correct conversion.
Besides that, there are some other coding issues in your project that I managed to work around… but they are minor, and first you have to check the conversion.
Happy to help again. Ping me anytime.
Best
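For what it's worth, the tflite_convert command earlier in the thread names the four TFLite_Detection_PostProcess outputs typical of an SSD detection model. If the current .tflite was produced the same way, the [1][10][4] tensor is the box-coordinates output, and it is normally read on the Java side together with three more arrays rather than a single [1][6] array. A rough sketch under that assumption, with at most 10 detections:

```java
import org.tensorflow.lite.Interpreter;
import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.Map;

final class DetectorOutputExample {
    // Assumption: the model is a detector exporting the four TFLite_Detection_PostProcess
    // outputs, with at most 10 detections (hence the [1][10][...] shapes).
    static void runDetector(Interpreter tflite, ByteBuffer imgData) {
        float[][][] locations   = new float[1][10][4]; // box coordinates -> the [1, 10, 4] tensor
        float[][]   classes     = new float[1][10];    // class index per detection (0..5 for six pieces)
        float[][]   scores      = new float[1][10];    // confidence per detection
        float[]     numDetections = new float[1];

        Map<Integer, Object> outputs = new HashMap<>();
        outputs.put(0, locations);
        outputs.put(1, classes);
        outputs.put(2, scores);
        outputs.put(3, numDetections);

        tflite.runForMultipleInputsOutputs(new Object[]{imgData}, outputs);
    }
}
```

If the end goal is a plain six-class classifier with a single [1][6] softmax output, that has to come from the conversion itself; the Java side cannot reshape detector outputs into classifier outputs, which is in line with the advice above to re-check the conversion.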
Hello, I looked at my model in Netron and indeed I saw 10 nodes at the output; I suppose those are what you mean by [1, 10, 4]. Attaching an image:
https://pasteboard.co/K3mBbgg.png
I followed a tutorial and I am pretty new to this area, so sorry if my knowledge is lacking. I will try building another model, look at it, and get in touch with you again. Thank you a lot for the help!
Hello again, I have a problem converting my .pb file to TFLite. I am using TensorFlow 1.14 now and trying to create the .tflite file with tflite_convert. The command is:

```
tflite_convert \
  --graph_def_file=tmp/frozen.pb \
  --output_file=tmp/model.tflite \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --input_arrays=input_tensor \
  --output_arrays=output_pred \
  --input_shapes=1,224,224,3
```
The error I get is: ValueError: The shape of tensor 'input_tensor' cannot be changed from (?, ?) to [1, 224, 224, 3]. Shapes must be equal rank, but are 2 and 4
Also, this is the frozen graph file I am trying to convert with the command above: https://www.pastefile.com/ahahwt
Thank you for the help.
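As a reference point, a minimal sketch of the equivalent conversion through the TF 1.x Python API. Note that the error above says the tensor named input_tensor has rank 2, so it is probably not the 4-D image placeholder; the input/output names and the shape below are assumptions to verify in Netron first:

```python
# Minimal TF 1.x sketch of the same conversion via the Python API.
# NOTE: the error above suggests 'input_tensor' is not the 4-D image input,
# so these names/shapes are assumptions -- check the real placeholder in Netron.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="tmp/frozen.pb",
    input_arrays=["input_tensor"],                    # must be the 4-D image input
    output_arrays=["output_pred"],
    input_shapes={"input_tensor": [1, 224, 224, 3]},  # pins the batch/height/width/channels
)
tflite_model = converter.convert()
with open("tmp/model.tflite", "wb") as f:
    f.write(tflite_model)
```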
Let’s tag @Sayak_Paul and ask if he has a Colab notebook demonstrating the appropriate way of converting TensorFlow 1.14 .pb files to TFLite.
Here’s one:
@thea, is it possible to install plugins for better code rendering? I would suggest installing at least two: one for regular Python code and one for Jupyter Notebook code.