Hi, Rahul.
Thank you for your response.
I am using an LSTM model from Keras:

import tensorflow as tf
from tensorflow.keras.layers import LSTM, Dropout, Flatten, Dense

model = tf.keras.Sequential([
    LSTM(units=50, input_shape=(sequence_length, nb_features), return_sequences=True, unroll=True),
    Dropout(0.2),
    LSTM(units=25, return_sequences=False, unroll=True),
    Dropout(0.2),
    Flatten(),
    Dense(units=nb_out, activation='sigmoid')
])
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
history = model.fit(
    seq_array, label_array,
    epochs=100, batch_size=200, validation_split=0.05, verbose=2,
    callbacks=[
        tf.keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0, patience=10, verbose=0, mode='min'),
        tf.keras.callbacks.ModelCheckpoint(model_path, monitor='val_loss', save_best_only=True, mode='min', verbose=0),
    ],
)
print(history.history.keys())
Then I am using this to convert to TFLite and apply the optimization:
def representative_data_gen():
    for input_value in tf.data.Dataset.from_tensor_slices(seq_array_test_last).batch(1).take(90):
        yield [input_value]

model = tf.keras.models.load_model(tflite_model_path, compile=False)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflm_opt_model = converter.convert()

with tf.io.gfile.GFile(tflite_opt_model_path, 'wb') as f:  # Output destination
    f.write(tflm_opt_model)
I will try with int8 (instead of uint8) for input/output as well.
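For reference, a minimal, self-contained sketch of the int8 variant I plan to try. It uses a tiny Dense stand-in model (`dummy_model` and the random representative data are placeholders, not from my script) just to show the changed converter settings:

```python
import numpy as np
import tensorflow as tf

# Placeholder model standing in for my LSTM, so the snippet runs on its own.
dummy_model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])

def representative_data_gen():
    # Random calibration data as a stand-in for seq_array_test_last.
    for _ in range(90):
        yield [np.random.rand(1, 4).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(dummy_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# The only change versus my current script: int8 instead of uint8.
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_int8_model = converter.convert()

# Sanity check: the converted model's input dtype should now be int8.
interpreter = tf.lite.Interpreter(model_content=tflite_int8_model)
print(interpreter.get_input_details()[0]['dtype'])
```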
I am using TF 2.19 and latest TFLM.
Sorry for not posting the complete script; I am not sure how much of it I can disclose at the moment.
You can see the output of the conversion in my original post.
In the meantime I have tried the Model Analyzer API, as you suggested. I ran it alongside a conversion without optimization (for the same model) for comparison.
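For completeness, I invoked the Analyzer roughly like this; the tiny Dense model below is only a stand-in to make the snippet self-contained, while in my script I pass the converted LSTM flatbuffer instead:

```python
import tensorflow as tf

# Placeholder model so the snippet runs on its own.
dummy = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(dummy).convert()

# Prints a per-operator/per-tensor breakdown of the flatbuffer to stdout.
tf.lite.experimental.Analyzer.analyze(model_content=tflite_bytes)
```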
The Analyzer log is quite long; this is just a small snippet from the very end: