I have an Object Detection model trained with TFLite Model Maker. I can run the model in my Android app with the Task Library, but it currently runs on the phone's CPU. How can I make the model run on the GPU with the Task Library? (I am running it from Java, not C++.)
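For what it's worth, more recent Task Library releases expose a GPU option through BaseOptions. Below is a minimal sketch, assuming a recent tensorflow-lite-task-vision dependency plus the tensorflow-lite-gpu-delegate-plugin artifact, with "model.tflite" standing in for the actual Model Maker export:

```java
import android.content.Context;
import java.io.IOException;
import org.tensorflow.lite.task.core.BaseOptions;
import org.tensorflow.lite.task.vision.detector.ObjectDetector;
import org.tensorflow.lite.task.vision.detector.ObjectDetector.ObjectDetectorOptions;

public final class DetectorFactory {

    // Requires the tensorflow-lite-task-vision and
    // tensorflow-lite-gpu-delegate-plugin Gradle artifacts.
    public static ObjectDetector createGpuDetector(Context context) throws IOException {
        ObjectDetectorOptions options =
                ObjectDetectorOptions.builder()
                        .setBaseOptions(BaseOptions.builder().useGpu().build())
                        .setMaxResults(5)
                        .build();
        // "model.tflite" is a placeholder for the Model Maker export in assets/.
        return ObjectDetector.createFromFileAndOptions(context, "model.tflite", options);
    }
}
```

If the device cannot run the GPU delegate, creation fails at runtime, so it is worth wrapping this in a CPU fallback.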
Sorry about the confusing wording. I mean that the detection result from the TensorFlow Interpreter is not as accurate as the result from the Task Library. For example, I trained a model for vehicle license plate detection; while the Task Library outputs a detection box that covers 100% of the plate, the Interpreter outputs a detection box that covers only a third of the plate.
We have been updating the Object Detection example over the past month. It seems that we have to use two different getTransformationMatrix methods, one for the Task Library and one for the Interpreter, or find an abstract class for drawing the boxes that both paths can share.
Now it is up to you to adapt the method mentioned above, since the models themselves work fine; the problem comes afterwards, when you want to render the results on screen.
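To make the coordinate mismatch concrete, here is a hedged sketch of the usual conversion. The helper name is hypothetical, and the [ymin, xmin, ymax, xmax] layout is an assumption about a typical SSD/Model Maker export, not something confirmed in this thread:

```java
import android.graphics.RectF;

public final class BoxMapper {

    /**
     * Hypothetical helper for illustration. Raw SSD-style Interpreter output is
     * typically normalized to 0..1 as [ymin, xmin, ymax, xmax], whereas the
     * Task Library returns boxes already in input-image pixels.
     */
    public static RectF toImagePixels(float[] normalizedBox, int imageWidth, int imageHeight) {
        float ymin = normalizedBox[0];
        float xmin = normalizedBox[1];
        float ymax = normalizedBox[2];
        float xmax = normalizedBox[3];
        // Scale the normalized corners onto the original image.
        return new RectF(
                xmin * imageWidth,   // left
                ymin * imageHeight,  // top
                xmax * imageWidth,   // right
                ymax * imageHeight); // bottom
    }
}
```

If the raw Interpreter output is drawn without a step like this, or with the wrong source dimensions, the box lands at the wrong position or scale, which would match the partial plate coverage described above.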
If you have the project online, please paste the link so we can review the problem.
I know this is an old thread, but I’m trying to do something similar to what the OP did.
I’m following the object detector tutorial (Object Detection with Android | TensorFlow Lite) and I’ve got the app built and running on a device with the delegate set to CPU. However, when I select GPU for the delegate within the app, I get an in-app error: “GPU is not supported on this device.”
Is it still true that “there is no option to use the Task Library with GPU,” as stated two years ago?
I imagine my mistake is something really basic, but I’ve tried it on an S22, an S23, a Pixel 6, and a Pixel 7, all with the same result.
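In the example app, that error string most likely comes from TFLite's CompatibilityList, which is a conservative allowlist rather than a hard hardware limit, so it can fire even on capable devices. A minimal sketch of the same check, assuming the org.tensorflow:tensorflow-lite-gpu artifact:

```java
import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.gpu.CompatibilityList;
import org.tensorflow.lite.gpu.GpuDelegate;

public final class GpuCheck {

    // Returns Interpreter options with the GPU delegate attached when the
    // device is on TFLite's GPU allowlist, otherwise plain CPU options.
    public static Interpreter.Options buildOptions() {
        Interpreter.Options options = new Interpreter.Options();
        CompatibilityList compatList = new CompatibilityList();
        if (compatList.isDelegateSupportedOnThisDevice()) {
            // Use the delegate options recommended for this device.
            options.addDelegate(new GpuDelegate(compatList.getBestOptionsForThisDevice()));
        }
        return options;
    }
}
```

If isDelegateSupportedOnThisDevice() returns false on all four phones, the TFLite version bundled with the app may simply predate their allowlist entries, so updating the dependencies is worth trying before anything else.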