Issue:
We’re using libtensorflowlite_jni.so (from org.tensorflow:tensorflow-lite:2.17.0) in an Android project.
We’re targeting SDK 35+ on Android devices that require 16 KB page size. But when inspecting the .so using readelf, the memory segments are aligned to 4 KB (0x1000), not 16 KB (0x4000). As a result, we get the error: Only 4 KB page compatible.
Question:
- Is there an officially supported build of libtensorflowlite_jni.so that supports 16 KB page size?
- If not, is there a CMake or Bazel flag to compile TensorFlow Lite with 16 KB alignment for Android?
Info:
- TensorFlow Lite version: 2.17.0
- Target ABI: arm64-v8a
- Min SDK: 23
- NDK: r25+
What we tried:
- Downloaded AAR from Maven Central
- Used readelf -l libtensorflowlite_jni.so to check LOAD alignment
Hi, @kevin_santoki
First of all, welcome to the Google AI Developers Forum, and I apologize for the delay in my response. The root cause of this issue is that TensorFlow Lite 2.17.0's pre-built native libraries are compiled with 4 KB page-size alignment, not the 16 KB alignment required on Android 15+ devices that run a 16 KB page-size kernel. When you inspect libtensorflowlite_jni.so with readelf -l, the LOAD segments show an alignment of 0x1000 (4 KB) instead of the required 0x4000 (16 KB).
The officially recommended solution is to migrate to LiteRT (formerly TensorFlow Lite) version 1.4.0, which ships 16 KB page-size compatible native libraries. Code migration is minimal because LiteRT keeps the same API structure as TensorFlow Lite; please refer to the official documentation, Migrate to LiteRT from TensorFlow Lite.
// Replace TensorFlow Lite dependencies with LiteRT
dependencies {
    // Remove these:
    // implementation 'org.tensorflow:tensorflow-lite:2.17.0'

    // Add these:
    implementation 'com.google.ai.edge.litert:litert:1.4.0'
    implementation 'com.google.ai.edge.litert:litert-api:1.4.0'
    implementation 'com.google.ai.edge.litert:litert-support:1.4.0'
    implementation 'com.google.ai.edge.litert:litert-metadata:1.4.0'
}
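On the code side, very little should need to change. Here is a minimal sketch, assuming the existing org.tensorflow.lite interpreter API carries over unchanged after the dependency swap (the model file and tensor shapes are placeholders for your own model):
Kotlin
import org.tensorflow.lite.Interpreter
import java.io.File

// Existing interpreter code keeps working; only the Gradle coordinates change.
fun runModel(modelFile: File, input: Array<FloatArray>): Array<FloatArray> {
    val output = Array(1) { FloatArray(1001) } // adjust to your model's output shape
    Interpreter(modelFile).use { interpreter ->
        interpreter.run(input, output)
    }
    return output
}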
If migration isn’t feasible, build TensorFlow Lite from source with 16KB alignment:
# Bazel build command with 16 KB alignment flag
bazel build -c opt --config=android_arm64 \
    --repo_env=HERMETIC_PYTHON_VERSION=3.12 \
    --linkopt='-Wl,-z,max-page-size=16384' \
    //tensorflow/lite/java:tensorflow-lite.aar
For custom builds ensure you have:
- Android NDK r28+ (builds with 16 KB alignment by default)
- Android Gradle Plugin 8.5.1+
- Gradle 8.4+
- Target SDK 35+
- NDK r27 or older: enable 16 KB alignment explicitly (see the sketch below)
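If you are still on NDK r27 and compile native code yourself, the Android 16 KB page-size documentation describes a CMake opt-in flag. A minimal build.gradle.kts sketch (the surrounding module configuration here is illustrative):
Kotlin
// Only needed when compiling native code with NDK r27; NDK r28+ aligns to 16 KB by default.
android {
    defaultConfig {
        externalNativeBuild {
            cmake {
                arguments += "-DANDROID_SUPPORT_FLEXIBLE_PAGE_SIZES=ON"
                // On NDK r26 or lower, add '-Wl,-z,max-page-size=16384' to your linker flags instead.
            }
        }
    }
}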
Thank you for your cooperation and patience.
Hi,
We’re using mobilenetv1.tflite for object detection in our Flutter project, but we’re planning to switch over to LiteRT since Google now requires 16 KB memory page size alignment for Android builds.
I went through some docs but couldn’t find a clear example of how to actually migrate.
A couple of questions:
- Do I need a .litert-compatible model, or can I just update the dependencies to the LiteRT ones without changing the existing .tflite model or code?
- What are the exact steps to make this migration work with minimal code changes?
- What are the right LiteRT-compatible versions for the dependencies we're currently using?
implementation("org.tensorflow:tensorflow-lite-task-vision:0.4.0")
implementation("org.tensorflow:tensorflow-lite-gpu-delegate-plugin:0.4.0")
implementation("org.tensorflow:tensorflow-lite-gpu:2.9.0")
And these are the imports for our detection logic:
import org.tensorflow.lite.task.vision.detector.Detection
import org.tensorflow.lite.gpu.CompatibilityList
import org.tensorflow.lite.support.image.ImageProcessor
import org.tensorflow.lite.support.image.TensorImage
import org.tensorflow.lite.support.image.ops.Rot90Op
import org.tensorflow.lite.task.core.BaseOptions
import org.tensorflow.lite.task.vision.detector.ObjectDetector
Hey @shubhcodeship, happy to see you migrating. No, you do not need to change your model; the existing mobilenetv1.tflite will work just fine. The migration is purely at the dependency and namespace level.
Here are the steps and the LiteRT-compatible versions you asked about:
- Dependency updates (the new stack)
The legacy Task Vision libraries (0.4.0) are deprecated, so update your build.gradle to the versions below, which support 16 KB page sizes:
Gradle
dependencies {
    // Consolidated LiteRT Vision Task library (replaces task-vision and core)
    implementation("com.google.ai.edge.litert:litert-tasks-vision:0.1.0")

    // GPU support
    implementation("com.google.ai.edge.litert:litert-gpu:0.1.0")

    // Support library for ImageProcessor
    implementation("com.google.ai.edge.litert:litert-support:0.1.0")
}
- Namespace refactoring (imports)
Your detection logic can stay the same, but you must update the package paths, because LiteRT has moved away from the org.tensorflow package prefix.
Old import (TFLite) -> New import (LiteRT)
org.tensorflow.lite.task.vision.detector.* -> com.google.ai.edge.litert.tasks.vision.detector.*
org.tensorflow.lite.gpu.* -> com.google.ai.edge.litert.gpu.*
org.tensorflow.lite.support.* -> com.google.ai.edge.litert.support.*
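Applied to the imports you listed, that mapping would look roughly like this (a sketch derived from the table above; double-check the exact package names against the LiteRT artifacts you actually pull in):
Kotlin
import com.google.ai.edge.litert.tasks.vision.detector.Detection
import com.google.ai.edge.litert.tasks.vision.detector.ObjectDetector
import com.google.ai.edge.litert.tasks.core.BaseOptions
import com.google.ai.edge.litert.gpu.CompatibilityList
import com.google.ai.edge.litert.support.image.ImageProcessor
import com.google.ai.edge.litert.support.image.TensorImage
import com.google.ai.edge.litert.support.image.ops.Rot90Op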
To handle the 16 KB page alignment:
Future Android devices will use 16 KB memory pages instead of 4 KB. That is why Google is pushing the 16 KB requirement: it improves performance, and the only build-side change is that native .so libraries have to be aligned to 16 KB at compile/link time.
By switching to the com.google.ai.edge.litert dependencies (v0.1.0+), you are pulling in native binaries that are already pre-aligned to 16 KB.
So the minimal code change is to replace all org.tensorflow.lite references with com.google.ai.edge.litert.
The second change is to make sure the ObjectDetector's BaseOptions now points to the LiteRT classes rather than the TensorFlow ones. That looks like this:
Kotlin
val baseOptions = com.google.ai.edge.litert.tasks.core.BaseOptions.builder()
    .useGpu() // replaces the old GPU delegate plugin logic
    .build()
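Wiring those BaseOptions into the detector would then look something like this (a sketch that assumes the Task Vision builder API is unchanged apart from the package move; context is your Android Context, and the option values are just examples):
Kotlin
val options = com.google.ai.edge.litert.tasks.vision.detector.ObjectDetector.ObjectDetectorOptions.builder()
    .setBaseOptions(baseOptions)
    .setMaxResults(5)        // tune for your use case
    .setScoreThreshold(0.5f) // tune for your use case
    .build()

// "mobilenetv1.tflite" is the same model file you already ship in assets
val detector = com.google.ai.edge.litert.tasks.vision.detector.ObjectDetector
    .createFromFileAndOptions(context, "mobilenetv1.tflite", options)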
Feel free to drop a comment below if you need more help.