I’ve built the tensorflow-lite.aar and tensorflow-lite-select-tf-ops.aar with reference to Reduce TensorFlow Lite binary size in order to build custom AAR files for my TensorFlow Lite models on Android.
But unfortunately only org.tensorflow.lite.InterpreterApi was inside them. org.tensorflow.lite.Interpreter was not found, so I could not use Interpreter#runSignature. Did I miss something? How can I build custom AAR files that include org.tensorflow.lite.Interpreter?
Hi @Ruoxin_He
Take a look at this. I have created a Colab notebook for this purpose. You can follow the procedure and come back with some feedback.
IMPORTANT: To build tensorflow-lite-select-tf-ops.aar, the free version of Colab is too slow and will close after 6-12 hours. You have to use the Pro version or hook Colab up to a GCP VM instance with at least 20 CPU cores. For tensorflow-lite.aar alone, the free version is OK.
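For reference, the basic selective-build invocation from the TFLite docs looks like the sketch below; the model file names and target architectures are placeholders you would replace with your own:

```shell
# Run from the root of a TensorFlow source checkout.
# --input_models lists the .tflite files the AAR must support;
# only the ops those models actually use are compiled in.
bash tensorflow/lite/tools/build_aar.sh \
  --input_models=model1.tflite,model2.tflite \
  --target_archs=arm64-v8a,armeabi-v7a
```

The script produces both tensorflow-lite.aar and, if your models need TF ops, tensorflow-lite-select-tf-ops.aar in the bazel output directory.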
Regards
@George_Soloupis Thank you for your quick reply! I have already built the tensorflow-lite.aar file, but after decompiling it I found that it differs from the official one: it doesn’t contain the Interpreter class.
![screenshot](https://i.v2ex.co/24Zrdcm7.png)
Hi @Ruoxin_He
If you want to proceed with your project, use the prebuilt tensorflow-lite.aar library that exists on the Maven repository. Additionally, please file a bug report at the TensorFlow GitHub page.
Regards
@George_Soloupis Thanks! After reading the source code I found that multi-signature support is part of the experimental features. I tried adding experimental = True in tensorflow/lite/tools/build_aar.sh, and the resulting tensorflow-lite.aar seems to work (it works well with the Maven version of the select-ops AAR). But the selective build of tensorflow-lite-select-tf-ops.aar still crashes.
This seems to be a feature request rather than a bug. I have submitted an issue here: https://github.com/tensorflow/tensorflow/issues/59941
Well, the problem has been solved. I hope this helps others:
Purpose
Make a selective build of TFLite Android to reduce the app size with an on-device-training model.
Main points
- Enable the experimental feature: add `experimental = True` to the `tflite_custom_android_library()` call in tensorflow/lite/tools/build_aar.sh. (On TF 2.11 it’s at line 73.)
- Add the option `echo -n "build --config=monolithic" >> /tensorflow_src/.bazelrc` to avoid the `_ZNK6google8protobuf7Message11GetTypeNameEv` error for tensorflow-lite-select-tf-ops.aar.
- Building TFLite can take a lot of CPU. I am using a C2-standard-60 with 60 vCPUs, 240 GB RAM, and a 128 GB disk on Google Compute Engine. One build takes about 45 minutes and costs about $2.
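The steps above can be sketched as a single shell session. This is a sketch under the assumptions stated in the thread (TF 2.11, source checked out at /tensorflow_src); the model file name is a placeholder:

```shell
# Sketch of the full procedure described above (TF 2.11 assumed).
cd /tensorflow_src

# 1. Enable the experimental feature so the Interpreter class
#    (and runSignature) is included in the custom AAR:
#    edit tensorflow/lite/tools/build_aar.sh and add
#    `experimental = True` to the tflite_custom_android_library()
#    call (around line 73 in TF 2.11).

# 2. Work around the protobuf GetTypeName linker error in the
#    select-tf-ops AAR by forcing a monolithic build.
echo -n "build --config=monolithic" >> /tensorflow_src/.bazelrc

# 3. Build both AARs, compiling in only the ops the model uses.
#    (my_training_model.tflite is a placeholder for your model.)
bash tensorflow/lite/tools/build_aar.sh \
  --input_models=my_training_model.tflite \
  --target_archs=arm64-v8a,armeabi-v7a
```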
This solved a problem that had been bothering me for a long time, really thank you!!!