We’re using libtensorflowlite_jni.so (from org.tensorflow:tensorflow-lite:2.17.0) in an Android project.
We’re targeting SDK 35+ on Android devices that require 16 KB page size. But when inspecting the .so using readelf, the memory segments are aligned to 4 KB (0x1000), not 16 KB (0x4000). As a result, we get the error: Only 4 KB page compatible.
Question:
Is there an officially supported build of libtensorflowlite_jni.so that supports 16 KB page size?
If not, is there a CMake or Bazel flag to compile TensorFlow Lite with 16 KB alignment for Android?
Info:
TensorFlow Lite version: 2.17.0
Target ABI: arm64-v8a
Min SDK: 23
NDK: r25+
What we tried:
Downloaded AAR from Maven Central
Used readelf -l libtensorflowlite_jni.so to check LOAD alignment
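The check described above can be sketched as follows (the AAR and .so paths are illustrative; adjust them to your local copy):

```shell
# AARs are zip archives, so extract the arm64-v8a library first (paths illustrative):
unzip -o tensorflow-lite-2.17.0.aar 'jni/arm64-v8a/libtensorflowlite_jni.so' -d extracted

# Print the Align column of every LOAD segment (-W keeps each segment on one line).
# 0x1000 means 4 KB pages; a 16 KB-compatible build shows 0x4000 (or larger).
readelf -lW extracted/jni/arm64-v8a/libtensorflowlite_jni.so | awk '$1 == "LOAD" { print $NF }'
```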
Hi @kevin_santoki,
First of all, welcome to our Google AI Developers Forum, and I apologize for the delay in my response. The root cause of this issue is that TensorFlow Lite 2.17.0's pre-built native libraries are compiled with 4 KB page-size alignment, not the 16 KB alignment required on Android 15+ devices. When inspecting the libtensorflowlite_jni.so file using readelf -l, the LOAD segments show an alignment of 0x1000 (4 KB) instead of the required 0x4000 (16 KB).
The officially recommended solution is to migrate to LiteRT (formerly TensorFlow Lite) version 1.4.0, which provides 16 KB page-size compatibility. The code migration is minimal, since LiteRT keeps the same API structure as TensorFlow Lite. Please refer to the official documentation, Migrate to LiteRT from TensorFlow Lite.
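For reference, the dependency swap might look like this in build.gradle (a sketch only; the artifact coordinate follows the com.google.ai.edge.litert naming used later in this thread, and the version is the one cited above):

```gradle
dependencies {
    // Sketch: swap the TensorFlow Lite artifact for the LiteRT one.
    // implementation("org.tensorflow:tensorflow-lite:2.17.0")  // old, 4 KB-aligned
    implementation("com.google.ai.edge.litert:litert:1.4.0")    // new, 16 KB-compatible
}
```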
We’re using mobilenetv1.tflite for object detection in our Flutter project, but we’re planning to switch over to LiteRT since Google now requires 16 KB memory page size alignment for Android builds.
I went through some docs but couldn’t find a clear example of how to actually migrate.
A couple of questions:
Do I need a .litert-compatible model, or can I just update the dependencies to the LiteRT ones without changing the existing .tflite model or code?
What are the exact steps to make this migration work with minimal code changes?
What are the right LiteRT-compatible versions for the below dependencies we’re currently using?
Hey @shubhcodeship, happy to see you migrating. No, you do not need to change your model; mobilenetv1.tflite should work just fine. The migration is purely at the dependency and namespace level.
Here are the steps and the LiteRT-compatible versions you asked about:
Dependency Updates (the new stack)
The legacy Task Vision libraries (0.4.0) are already deprecated, so please update your build.gradle to the versions below, which support 16 KB page sizes:
Gradle
dependencies {
    // Consolidated LiteRT Vision Task library (replaces task-vision and core)
    implementation("com.google.ai.edge.litert:litert-tasks-vision:0.1.0")

    // GPU support
    implementation("com.google.ai.edge.litert:litert-gpu:0.1.0")

    // Support library for ImageProcessor
    implementation("com.google.ai.edge.litert:litert-support:0.1.0")
}
Namespace Refactoring (Imports)
You can keep your logic unchanged, but you must update the package paths.
The reason is that LiteRT has moved away from the org.tensorflow name prefix.
To handle the 16 KB page alignment:
Future Android devices will use 16 KB memory pages instead of 4 KB. Google is requiring 16 KB alignment because it improves performance; only the native .so libraries have to be aligned accordingly at build time.
By switching to the com.google.ai.edge.litert dependencies (v0.1.0+), you are pulling in native binaries that are already pre-aligned to 16 KB.
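For anyone building the native library themselves rather than consuming pre-aligned binaries, the alignment is controlled by a linker flag documented in Android's 16 KB page-size guide; a hedged CMake sketch (the target name is illustrative):

```cmake
# Sketch: force 16 KB-aligned LOAD segments when linking your own .so.
# NDK r27+ with AGP 8.5.1+ sets this by default; older toolchains need it explicitly.
target_link_options(tensorflowlite_jni PRIVATE "-Wl,-z,max-page-size=16384")
```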
So the minimal code change is to replace all org.tensorflow.lite references with com.google.ai.edge.litert.
The second change is to ensure that the ObjectDetector's BaseOptions now points to the LiteRT classes, not the TensorFlow ones.
That will look like this:
Kotlin
val baseOptions = com.google.ai.edge.litert.tasks.core.BaseOptions.builder()
    .useGpu() // replaces the old delegate plugin logic
    .build()
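The bulk namespace rename (org.tensorflow.lite → com.google.ai.edge.litert) can be sketched as a one-off shell pass. This is a hedged sketch: the source directory is illustrative, and you should review the diff before committing, since not every class necessarily maps one-to-one:

```shell
# Sketch: rewrite the org.tensorflow.lite package prefix to the LiteRT one.
# Run from the module root; the source path is illustrative.
grep -rl 'org\.tensorflow\.lite' app/src/main/java \
  | xargs -r sed -i 's/org\.tensorflow\.lite/com.google.ai.edge.litert/g'
```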
Feel free to drop a comment below if you need more help.
I still have some models that are not compatible with LiteRT. When will there be a version of TFLite that is 16 KB page-size compliant? I tried to build TFLite with the Bazel options you gave, and I cannot get it to find the NDK toolchain.
ToolchainResolution: No @@bazel_tools//tools/android:sdk_toolchain_type toolchain found for target platform //tensorflow/tools/toolchains/android:arm64-v8a.
I have ANDROID_NDK_HOME set in my .bashrc. Is there another environment variable or setup step needed to help Bazel find the toolchain?
I previously used the org.tensorflow:tensorflow-lite:2.17.0 library for face verification. To ensure compatibility with 16 KB page sizes and access the latest features, I have migrated to the new LiteRT library: com.google.ai.edge.litert:litert:2.1.1.
The implementation is working correctly, but I am looking for the most up-to-date versions of the facenet.tflite or mobilefacenet.tflite models. Where can I download the latest pre-trained versions of these models?