Mixed precision in TFLite quantization: int16 not supported for input

I am trying mixed precision with TFLite quantization: input as int16, matrix weights as int8, bias as int32, and output as int8.
Unfortunately, I am unable to set the input precision to int16, unlike the matrix weights, bias, and output.
Is there any option or customization flag to make the input int16? A sketch of the kind of setup I am attempting is below.
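
A minimal sketch of that setup, assuming a Keras model (the model and calibration data here are just placeholders for the real ones). Requesting `tf.int16` as the inference input type is the step the converter rejects:

```python
import tensorflow as tf

# Placeholder model and calibration data, standing in for the real ones.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(16,)),
    tf.keras.layers.Dense(10),
])

def representative_dataset():
    for _ in range(100):
        yield [tf.random.normal((1, 16))]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int16   # this is what I want, but it is rejected
converter.inference_output_type = tf.int8   # int8 output works fine
tflite_model = converter.convert()          # fails because of the int16 input type
```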

Hi @Rajesh_Shanmugam,

TFLite currently does not support int16 for input data during quantization. I can confirm that the conversion fails for int16 input/activations, int8 weights, and int32 bias. Here is the gist. If you could elaborate on your use case, the team will assess its usefulness and work on it.
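
For context, the closest configuration TFLite does offer is the experimental 16x8 quantization mode (int16 activations with int8 weights). Note that even in that mode the model's input and output tensors remain float32, so it does not give you an int16 model input. A minimal, self-contained sketch with the same placeholder model and calibration data as above:

```python
import tensorflow as tf

# Placeholder model and calibration data.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(16,)),
    tf.keras.layers.Dense(10),
])

def representative_dataset():
    for _ in range(100):
        yield [tf.random.normal((1, 16))]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Experimental 16x8 mode: int16 activations, int8 weights.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8
]
# No inference_input_type override: input/output stay float32 in this mode.
tflite_model = converter.convert()
```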

Thank you