I’m exploring converting TensorFlow models to TensorFlow Lite for on-device training on Android phones. My conversion code looks like this:
```python
import tensorflow as tf

# Convert the model
converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL_DIR)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # enable TensorFlow Lite ops
    tf.lite.OpsSet.SELECT_TF_OPS,    # enable TensorFlow ops (Flex delegate)
]
converter.experimental_enable_resource_variables = True
tflite_model = converter.convert()
```
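For completeness, the converted bytes still have to be written to disk before they can be bundled with the app, and it's worth sanity-checking that the resulting file loads in the Python interpreter. A minimal sketch, using a toy Keras model as a stand-in for the real SavedModel (the model, filename, and shapes here are illustrative assumptions):

```python
import os

import tensorflow as tf

# Toy stand-in model (assumption: your real model/SavedModel goes here).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(4),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
tflite_model = converter.convert()

# Persist the flatbuffer so it can be shipped in the APK's assets.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)

# Sanity check: the file loads and tensors can be allocated.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_shape = interpreter.get_input_details()[0]["shape"]
```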
My primary concern is the stability and consistency of training results when using the converted TFLite model on an Android device. Specifically, will training the same model on equivalent data on-device yield significantly different results than its original TensorFlow counterpart? Training instability and notable variance in performance metrics are my main worries.
I understand that some discrepancies may arise from the inherent differences between the TensorFlow and TensorFlow Lite runtimes. What I’d like to know is how large these discrepancies tend to be in practice, especially in the context of on-device training.
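Before worrying about training drift, one way to bound the runtime discrepancy is to run the original model and its converted counterpart on identical inputs and compare outputs. A sketch with a toy Keras model (an assumption standing in for the real model; a converted on-device-training model would expose named signatures instead of a single input/output pair):

```python
import numpy as np
import tensorflow as tf

# Toy model (assumption) standing in for the real network.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(4),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
tflite_model = converter.convert()

# Run the TFLite model on a fixed input.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

x = np.random.rand(1, 8).astype(np.float32)
interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
tflite_out = interpreter.get_tensor(out["index"])

# Run the original TensorFlow model on the same input and compare.
tf_out = model(x).numpy()
max_diff = float(np.max(np.abs(tf_out - tflite_out)))
print(f"max abs diff: {max_diff:.2e}")
```

For a float32 model without quantization, the difference should be near floating-point noise; a large gap here would point at the conversion itself rather than at on-device training dynamics.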
Does anyone here have experience or insights regarding the stability and consistency of training results with converted TensorFlow Lite models on mobile devices? Any shared knowledge or tips on ensuring more reliable outcomes in such scenarios would be greatly appreciated.