TensorFlow Lite quantization: LSTM running on TensorFlow Lite Micro

Hello,
I am trying to run an LSTM model on a Cortex-M7.
To improve performance, I want to quantize the model.
The problem is that the TensorFlow Lite Micro kernel only supports int16 cell states.
Is there a way to convert and quantize the model with int16 cell states?