Densify op not implemented in TF Lite Micro

Hello,
I have trained and pruned a model in TF to deploy it to a Raspberry Pi Pico. I then converted the model to TF Lite with the optimization `converter.optimizations = [tf.lite.Optimize.EXPERIMENTAL_SPARSITY]` (as shown in Pruning for on-device inference w/ XNNPACK | TensorFlow Model Optimization). The problem is that this optimization inserts an operation called Densify, which I believe is not implemented in TF Lite Micro. Note that pruned models are especially important for this kind of device, which has very limited resources.
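For context, my conversion step looks roughly like this (a minimal sketch; `pruned_model` stands in for the Keras model trained with pruning wrappers):

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Strip the pruning wrappers before conversion, as the XNNPACK
# pruning guide shows. `pruned_model` is a placeholder name.
model = tfmot.sparsity.keras.strip_pruning(pruned_model)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# This optimization stores weights sparsely; it is what introduces
# the Densify op into the converted graph.
converter.optimizations = [tf.lite.Optimize.EXPERIMENTAL_SPARSITY]
tflite_model = converter.convert()

with open("pruned_model.tflite", "wb") as f:
    f.write(tflite_model)
```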

This is the model: (300,100,10)
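That is, reading (300, 100, 10) as the widths of the dense layers, the architecture would be roughly:

```python
import tensorflow as tf

# Hypothetical reconstruction from the (300, 100, 10) description:
# three fully connected layers. The input shape is not given in the
# post, so it is omitted here (Keras infers it on the first call).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(300, activation="relu"),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
```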

Has anybody experienced this issue? Is there any solution?

Hi @Lluc_Crespi,

Currently, the Densify op is not implemented in TF Lite Micro. As an alternative, you could try post-training integer-only quantization for deploying on your device.
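A minimal sketch of that flow, assuming `model` is your Keras model and using random placeholder data for calibration (the input shape below is hypothetical; substitute your real one):

```python
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Placeholder calibration data; replace with ~100-500 real input
    # samples so the converter can estimate activation ranges.
    for _ in range(200):
        yield [np.random.rand(1, 300).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Restrict the converter to int8 kernels so every op in the model
# can run on TF Lite Micro's integer implementations.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()
```

This produces a fully int8 model, which also shrinks the file to roughly a quarter of the float32 size.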

Thank You

Hi all,
Is it sensible to port the ‘densify’ operation from TFLite to TFLite-Micro?
(Is it sensible to use post-training pruning for deployment on an MCU?)

If I understood it properly, pruning is an optimization for the size of the model (a smaller model image); it doesn't improve inference time?