Things to know when deploying a classification model on an Android device

TensorFlow has some great libraries supporting the deployment of classification models on Android devices. Making sense of them is a bit of a struggle if you’re new, so I’m writing this post to point people to the most important parts of each library.

Checking out one of the many TF demonstration applications can put you in a really good spot in terms of understanding what the flow of the application will look like, but it’s not enough to skim their code and feel like you understand it…You should code it yourself!

Here is their classification demo app.
Here is ML Kit’s all-in-one demo app, which is quite beefy (this app is just kind of cool and has some useful classes in it).

Both of these apps are written in Java, but I would recommend using Kotlin, as Android is trying really hard to get people to use it (they have a lot of support for it).

Correctly configuring Android Studio is always a pain, so be ready for that. There are lots of incompatibilities between the TensorFlow libraries and Android Studio SDKs. Take your time here and google the errors you get. It’s all very overwhelming to begin with, so know you’re not alone :slight_smile:
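
To make the setup a little more concrete, here’s roughly what the relevant app-level Gradle dependencies could look like (Kotlin DSL). The version numbers below are just placeholders from when I set this up, so check for the latest ones:

```kotlin
// app/build.gradle.kts -- versions are illustrative, not canonical
dependencies {
    // TFLite Task Library (vision) and Support Library
    implementation("org.tensorflow:tensorflow-lite-task-vision:0.4.4")
    implementation("org.tensorflow:tensorflow-lite-support:0.4.4")

    // CameraX, for grabbing frames from the camera feed
    implementation("androidx.camera:camera-camera2:1.3.0")
    implementation("androidx.camera:camera-lifecycle:1.3.0")
    implementation("androidx.camera:camera-view:1.3.0")
}
```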

CameraX is going to be the first Android library that you will have to get intimate with. It has a nice ImageAnalysis use case that grabs images from the camera feed and lets you run some computation on each frame.
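
A minimal ImageAnalysis setup looks something like the sketch below (binding the use case to a lifecycle with a ProcessCameraProvider is omitted to keep it short):

```kotlin
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageProxy
import java.util.concurrent.Executors

// Build an ImageAnalysis use case; KEEP_ONLY_LATEST drops stale frames
// so the analyzer never falls behind the camera feed.
val imageAnalysis = ImageAnalysis.Builder()
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
    .build()

imageAnalysis.setAnalyzer(Executors.newSingleThreadExecutor()) { imageProxy: ImageProxy ->
    // Run your model on the frame here, then close the proxy so
    // CameraX can deliver the next frame.
    imageProxy.close()
}
```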

Bitmaps are an integral part of any computer vision application on Android, as they are the most common way to manipulate images. One common snag is converting the camera feed to a bitmap: the camera shoots out YUV images, and these are not directly usable by the model. Here’s a gist to convert the output of an ImageAnalysis to a bitmap.
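
If you don’t want to dig through the gist, here’s the general idea as a sketch. Fair warning: it assumes the common YUV_420_888 layout where the V plane’s buffer gives you the VU bytes interleaved (true on most devices), and it ignores rotation:

```kotlin
import android.graphics.Bitmap
import android.graphics.BitmapFactory
import android.graphics.ImageFormat
import android.graphics.Rect
import android.graphics.YuvImage
import androidx.camera.core.ImageProxy
import java.io.ByteArrayOutputStream

// Convert a YUV_420_888 ImageProxy to a Bitmap by repacking the planes
// as NV21 and round-tripping through a JPEG. Not the fastest path, but
// it's simple and works on most devices.
fun ImageProxy.toBitmap(): Bitmap {
    val yBuffer = planes[0].buffer   // Y plane
    val vuBuffer = planes[2].buffer  // V plane; interleaved VU on most devices
    val ySize = yBuffer.remaining()
    val vuSize = vuBuffer.remaining()

    val nv21 = ByteArray(ySize + vuSize)
    yBuffer.get(nv21, 0, ySize)
    vuBuffer.get(nv21, ySize, vuSize)

    val yuvImage = YuvImage(nv21, ImageFormat.NV21, width, height, null)
    val out = ByteArrayOutputStream()
    yuvImage.compressToJpeg(Rect(0, 0, width, height), 90, out)
    val jpegBytes = out.toByteArray()
    return BitmapFactory.decodeByteArray(jpegBytes, 0, jpegBytes.size)
}
```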

TFLite provides two different libraries that enable model execution on an Android phone:
the Support Library and the Task Library. Link.

The Task Library is easier to use, but does not have as much flexibility as the Support Library. I would recommend using the Task Library. One important difference between the two is that the Support Library allows for batch inference, while the Task Library does not.

For classification, you have to make sure that the image you’re sending into the model has exactly the same dimensions as the model’s input. To do this, you need to resize your bitmap accordingly. This usually means taking a subsection (crop) of the bitmap, or shrinking the bitmap down to an acceptable size.
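
Two quick ways to do that with plain Android APIs. The 224x224 size is just a placeholder (use your model’s actual input shape), and `frameBitmap` stands in for the bitmap you got in the previous step:

```kotlin
import android.graphics.Bitmap
import android.media.ThumbnailUtils

val inputSize = 224 // placeholder; match your model's input dimensions

// Option 1: take a centered square subsection and scale it down.
val cropped: Bitmap = ThumbnailUtils.extractThumbnail(frameBitmap, inputSize, inputSize)

// Option 2: squash the whole frame to size (distorts the aspect ratio).
val scaled: Bitmap = Bitmap.createScaledBitmap(frameBitmap, inputSize, inputSize, true)
```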

Once you have a correctly sized bitmap, you can use the handy TensorImage.fromBitmap(bitmap) to create a TensorImage (the function is provided by the Support Library API). Once you have a TensorImage, and if you’ve initialized TFLite through the Task Library API, all you have to do is send the image into the classify function and you’re good to go! Results come back as a list of classifications.
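
Put together, the Task Library path looks roughly like this ("model.tflite" is a placeholder for your own model file in the assets folder):

```kotlin
import android.content.Context
import android.graphics.Bitmap
import org.tensorflow.lite.support.image.TensorImage
import org.tensorflow.lite.task.vision.classifier.ImageClassifier

fun classify(context: Context, bitmap: Bitmap) {
    // In a real app, initialize the classifier once and reuse it;
    // it's inlined here to keep the sketch self-contained.
    val options = ImageClassifier.ImageClassifierOptions.builder()
        .setMaxResults(3) // keep only the top 3 labels
        .build()
    val classifier = ImageClassifier.createFromFileAndOptions(
        context, "model.tflite", options
    )

    val tensorImage = TensorImage.fromBitmap(bitmap)
    val results = classifier.classify(tensorImage)

    // Each Classifications entry holds categories with a label and a score.
    results.firstOrNull()?.categories?.forEach { category ->
        println("${category.label}: ${category.score}")
    }
}
```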

What you do with these results is up to you, but usually you want to update the screen in some way to let the user know that the model has detected something of interest. Check out these classes from the ML Kit demo library. You’ll also need the GraphicOverlay and FrameMetadata classes to make this work!
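
If you want something simpler than the overlay classes, updating a plain TextView works too. A sketch, assuming you’re inside an Activity and `resultText` is a hypothetical TextView from your layout; analyzer callbacks run off the main thread, so hop back to it first:

```kotlin
// `results` is the list returned by classifier.classify(...)
val top = results.firstOrNull()?.categories?.maxByOrNull { it.score }
runOnUiThread {
    resultText.text = top?.let { "${it.label} (%.2f)".format(it.score) }
        ?: "Nothing detected"
}
```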

If you have any questions or want me to elaborate drop a comment.

I hope this helps someone!


Nice post, Isaac!

Just to add some more resources:
for the Task Library, here’s a good starting point: TensorFlow Lite Task Library
