Hello everyone!
This is my first neural network, so I often run into problems, and now I've hit one I can't solve.
My network performs binary classification (patient is healthy / patient is sick). The input layer is fed 12 numeric values. I created and trained the network in Colab; it trained well and shows acceptable results on the validation set (val_accuracy: 0.95, val_loss: 0.13). But after converting the model to .tflite and running it on a smartphone, it can't predict anything.
I have changed the number of layers, converted the model with both tf.lite.TFLiteConverter.from_saved_model and tf.lite.TFLiteConverter.from_keras_model, inspected the .tflite file in Netron, and tried changing how the data is fed in on Android, but nothing helped.
I think the problem is that the data is passed incorrectly to the input layer of the tflite model on the Android side, but this is just a guess. If so, could you please tell me how to fix it?
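If the data-transfer guess is right, one classic culprit is byte order: a Java ByteBuffer defaults to big-endian, while the TFLite interpreter reads the buffer in the device's native order (little-endian on virtually all Android hardware). A small Python sketch (using only the standard struct module) of how the same 12 floats produce two different byte streams:

```python
import struct

values = [float(i) for i in range(1, 13)]  # 12 input features

little = struct.pack("<12f", *values)  # native order on most devices: what TFLite reads
big = struct.pack(">12f", *values)     # ByteBuffer's default order

print(len(little))        # 48 bytes, matching allocateDirect(48)
print(little == big)      # False: same floats, different bytes
```

If the bytes are written big-endian but read little-endian, the interpreter sees garbage values, which would explain a model that validates fine in Colab but "can't predict anything" on the phone.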
This is my Colab code:
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras import layers, models

raw_dataset = pd.read_csv('data.csv')
dataset = raw_dataset.copy()

# 80/20 train/test split
train_dataset = dataset.sample(frac=0.8, random_state=0)
test_dataset = dataset.drop(train_dataset.index)

# Columns 0-11 are the 12 features, column 12 is the label
train_data = train_dataset.values.astype("float32")
test_data = test_dataset.values.astype("float32")
train_x, train_y = train_data[:, 0:12], train_data[:, 12]
test_x, test_y = test_data[:, 0:12], test_data[:, 12]
model = models.Sequential()
model.add(layers.Dense(64, activation="tanh", input_dim=12))
model.add(layers.Dense(32, activation="tanh"))
model.add(layers.Dense(16, activation="tanh"))
model.add(layers.Dense(1, activation="sigmoid"))
model.summary()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
results = model.fit(train_x, train_y, epochs=100, batch_size=10, validation_data=(test_x, test_y))
# Save the model as a SavedModel
saved_model_path = "my_SavedModel"
tf.saved_model.save(model, saved_model_path)

# Convert the SavedModel to TFLite
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_path)
tflite_model = converter.convert()

# Save the tflite model
with open('modelSavedModel.tflite', 'wb') as f:
    f.write(tflite_model)
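Before debugging on the phone, it is worth confirming in Colab that the converted model itself still works, by running it with tf.lite.Interpreter and comparing against the Keras output. A self-contained sketch with a toy model of the same input shape (substitute the real model; the layer sizes here are placeholders):

```python
import numpy as np
import tensorflow as tf

# Toy stand-in with the same input shape (12 features)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="tanh", input_dim=12),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Convert directly from the Keras model
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Run the converted model in-process
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

x = np.random.rand(1, 12).astype(np.float32)
interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
tflite_pred = interpreter.get_tensor(out["index"])

keras_pred = model.predict(x)
print(np.allclose(tflite_pred, keras_pred, atol=1e-4))
```

If the two predictions agree here, the .tflite file is fine and the problem is on the Android side (input packing or byte order).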
This is my Java code on Android:
ByteBuffer byteBuffer = ByteBuffer.allocateDirect(48); // 12 floats * 4 bytes
byteBuffer.order(ByteOrder.nativeOrder()); // TFLite reads native order; ByteBuffer defaults to big-endian
byteBuffer.putFloat(valueInt1);
byteBuffer.putFloat(valueInt2);
byteBuffer.putFloat(valueInt3);
byteBuffer.putFloat(valueInt4);
byteBuffer.putFloat(valueInt5);
byteBuffer.putFloat(valueInt6);
byteBuffer.putFloat(valueInt7);
byteBuffer.putFloat(valueInt8);
byteBuffer.putFloat(valueInt9);
byteBuffer.putFloat(valueInt10);
byteBuffer.putFloat(valueInt11);
byteBuffer.putFloat(valueInt12);
try {
    ModelSavedModel model = ModelSavedModel.newInstance(context);

    // Create the input tensor (shape [1, 12], float32)
    TensorBuffer inputFeature0 = TensorBuffer.createFixedSize(new int[]{1, 12}, DataType.FLOAT32);
    inputFeature0.loadBuffer(byteBuffer);

    // Run model inference and get the result
    ModelSavedModel.Outputs outputs = model.process(inputFeature0);
    TensorBuffer outputFeature0 = outputs.getOutputFeature0AsTensorBuffer();

    // Release model resources once no longer used
    model.close();

    float preResult = outputFeature0.getFloatArray()[0] * 100;
    int result = (int) preResult;
    System.out.println(preResult);
} catch (IOException e) {
    // TODO Handle the exception
}
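Since the last layer is a single sigmoid unit, the output is one float in [0, 1]. Assuming the "sick" class was encoded as 1 in data.csv (this is an assumption; flip the labels if it is the other way), a minimal sketch of turning the raw output into a class label:

```python
def interpret(sigmoid_output, threshold=0.5):
    # Assumes label 1 == "sick"; swap the strings if data.csv encodes it the other way
    return "sick" if sigmoid_output >= threshold else "healthy"

print(interpret(0.93))  # -> sick
print(interpret(0.07))  # -> healthy
```

The same thresholding applies on the Android side to outputFeature0.getFloatArray()[0] before multiplying by 100 for display.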