To perform inference on a TFLite model, one has to write code like this:
import numpy as np
import tensorflow as tf

# Load the TFLite model and allocate its tensors.
interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()

# Get input and output tensor details.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Prepare random input data matching the model's input shape.
input_shape = input_details[0]['shape']
input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)

# Run inference and read the output tensor.
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)
To run inference on a regular TensorFlow model, you just do:
tfmodel.predict(test_dataset)
My question is: why is the TFLite API not made simple and consistent with the regular TensorFlow model API? If they were the same, it would be much easier to remember and work with.
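For illustration, the boilerplate can be hidden behind a small wrapper. This is just my own sketch, not part of the TFLite API; the class name LiteModel is made up, and it assumes a model with a single input and output tensor and a batch size of 1:

import numpy as np
import tensorflow as tf

class LiteModel:
    """Hypothetical Keras-like wrapper around tf.lite.Interpreter."""
    def __init__(self, model_path):
        self.interpreter = tf.lite.Interpreter(model_path=model_path)
        self.interpreter.allocate_tensors()
        self.input_details = self.interpreter.get_input_details()
        self.output_details = self.interpreter.get_output_details()

    def predict(self, batch):
        # Run inference one sample at a time, since the input
        # tensor here is assumed to have a fixed batch size of 1.
        outputs = []
        for sample in batch:
            sample = np.expand_dims(np.asarray(sample, dtype=np.float32), axis=0)
            self.interpreter.set_tensor(self.input_details[0]['index'], sample)
            self.interpreter.invoke()
            outputs.append(self.interpreter.get_tensor(self.output_details[0]['index'])[0])
        return np.array(outputs)

With such a wrapper, inference would look like the Keras call: LiteModel("converted_model.tflite").predict(x_test). But it seems like something the API should provide out of the box.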
Also, I was not able to find a function to evaluate TFLite model performance on a test dataset, i.e. something like
tflite_model.evaluate(test_ds)
Am I missing something, or does this method exist, maybe under a different name?
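In the meantime I wrote my own evaluation loop along these lines. This is only a sketch under my own assumptions: a classification model with one input and one output tensor, and test_ds yielding individual (sample, label) pairs; shapes and dtypes would need adapting for other models:

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]['index']
output_index = interpreter.get_output_details()[0]['index']

correct, total = 0, 0
for sample, label in test_ds:  # assumed to yield one example at a time
    # Add the batch dimension expected by the input tensor.
    sample = np.expand_dims(np.asarray(sample, dtype=np.float32), axis=0)
    interpreter.set_tensor(input_index, sample)
    interpreter.invoke()
    probs = interpreter.get_tensor(output_index)[0]
    correct += int(np.argmax(probs) == int(label))
    total += 1

print("TFLite accuracy:", correct / total)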