Hi,
I have successfully converted my TensorFlow model from .h5 to .tflite and was able to use it in Python to run inference on a random test image (see tflite_model_test.ipynb under Colab Notebooks - Google Drive).
I then tried to port it to Android Studio (Arctic Fox): I loaded the model into assets and attempted two code paths to run inference, and both returned NaN values (see code below). I have been looking for answers for the last few days and couldn't find anything that works. I would really appreciate it if someone could point out what I may have missed.
Code:
String path = "/storage/emulated/0/Pictures/test_img.png";
try {
    if (!OpenCVLoader.initDebug())
        Log.e("OpenCV", "Unable to load OpenCV!");
    else
        Log.d("OpenCV", "OpenCV loaded successfully!");

    // Create the input tensor for inference.
    TensorBuffer inputFeature = TensorBuffer.createFixedSize(new int[]{1, 256, 256, 1}, DataType.FLOAT32);
    Mat image = Imgcodecs.imread(path);
    // imread() loads images in BGR channel order, so convert from BGR (not RGB).
    Imgproc.cvtColor(image, image, Imgproc.COLOR_BGR2GRAY);
    Imgproc.resize(image, image, new Size(256, 256));

    // Check that the image is loaded properly.
    Bitmap testBmp = Bitmap.createBitmap(image.cols(), image.rows(), Bitmap.Config.ARGB_8888);
    Utils.matToBitmap(image, testBmp);

    // Normalize to [0, 1] first.
    image.convertTo(image, CvType.CV_32FC(1));
    Mat dstFrame = new Mat();
    Core.divide(image, new Scalar(255.0), dstFrame);

    // Load the image contents into inputFeature.
    float[] floats = new float[dstFrame.rows() * dstFrame.cols()];
    dstFrame.get(0, 0, floats);
    byte[] bytes = floatToByte(floats);
    ByteBuffer byteBuffer = ByteBuffer.wrap(bytes);
    inputFeature.loadBuffer(byteBuffer);

    // Sanity check that retBmp matches testBmp; using the Android Studio debugger
    // I confirmed this is the case, and the image also matches the input used in Colab.
    float[] floats1 = inputFeature.getFloatArray();
    Mat retFrame = new Mat(256, 256, CvType.CV_32FC(1));
    retFrame.put(0, 0, floats1);
    Core.multiply(retFrame, new Scalar(255.0), retFrame);
    Bitmap retBmp = convertMat2Bitmap(retFrame);

    // Method 1: the default path suggested by the Android Studio sample code for test_model.tflite.
    TestModel model = TestModel.newInstance(this);
    TestModel.Outputs outputs = model.process(inputFeature);
    TensorBuffer outputFeature = outputs.getOutputFeature0AsTensorBuffer();
    float[] data = outputFeature.getFloatArray();
    // data is populated with NaN values
    int[] bbox = new int[4];
    for (int i = 0; i < 4; i++)
        bbox[i] = (int) (data[i] * 256);

    // Release model resources once no longer used.
    model.close();

    // Method 2: use the Interpreter directly; this yields the same NaN values.
    MappedByteBuffer tfliteModel = FileUtil.loadMappedFile(this, "test_model.tflite");
    Interpreter tflite = new Interpreter(tfliteModel);
    TensorBuffer outputBuffer = TensorBuffer.createFixedSize(new int[]{1, 4}, DataType.FLOAT32);
    tflite.run(inputFeature.getBuffer(), outputBuffer.getBuffer());
    float[] data2 = outputBuffer.getFloatArray();
    // data2 is populated with NaN values
    int[] bbox2 = new int[4];
    for (int i = 0; i < 4; i++)
        bbox2[i] = (int) (data2[i] * 256.0);

    Log.d("The end", "last line of the code");
} catch (Throwable t) {
    Log.e("ERROR", "Exception during inference", t);
}
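For completeness, the floatToByte helper called above is not shown, so the sketch below is a hypothetical stand-in for it, not the actual implementation. One detail worth checking in the real helper: the TFLite interpreter reads the input ByteBuffer in the device's native byte order, while Java's default (including ByteBuffer.wrap()) is big-endian, so a byte-order mismatch when packing the floats silently scrambles every value and is a common cause of all-NaN outputs.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.Arrays;

public class FloatBytes {

    // Hypothetical stand-in for the floatToByte helper: pack a float[]
    // into a byte[] in the device's NATIVE byte order, which is what the
    // TFLite interpreter expects when it reads the input buffer. Omitting
    // order(ByteOrder.nativeOrder()) leaves the buffer big-endian, which
    // corrupts the floats on little-endian devices (virtually all Android phones).
    static byte[] floatToByte(float[] src) {
        ByteBuffer buf = ByteBuffer.allocate(src.length * 4)
                                   .order(ByteOrder.nativeOrder());
        buf.asFloatBuffer().put(src);
        return buf.array();
    }

    // Round-trip helper for verifying the packing: unpack with the same order.
    static float[] byteToFloat(byte[] src) {
        float[] out = new float[src.length / 4];
        ByteBuffer.wrap(src).order(ByteOrder.nativeOrder())
                  .asFloatBuffer().get(out);
        return out;
    }

    public static void main(String[] args) {
        float[] in = {0.0f, 0.5f, 1.0f, -1.0f};
        byte[] bytes = floatToByte(in);
        System.out.println(bytes.length);                          // 16
        System.out.println(Arrays.equals(in, byteToFloat(bytes))); // true
    }
}
```

Note that TensorBuffer.loadBuffer() copies the raw bytes as-is, so what matters is the order they were written in, not the order flag on the wrapping buffer. If the support library version in use provides it, inputFeature.loadArray(floats) sidesteps the byte[] round trip (and the byte-order question) entirely.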