The issue you’re encountering with the TensorFlow Lite model is related to a mismatch in the dimensions of the input tensor. Your audio_windowed tensor has the shape (11, 43844, 1), but the expected input shape for your model is [1, 43844, 1]. This indicates that your model is expecting a single instance with dimensions 43844 x 1, but you are providing 11 instances.

Here’s how you can resolve this:

Reshape Input Tensor: You need to reshape your input tensor to match the model’s expected input shape. Since your model expects a single instance, you should select one instance from your audio_windowed tensor or modify your data preparation pipeline to produce a single instance with the correct shape.
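As a sketch of that reshaping step (using NumPy directly; the array contents here are placeholders, only the shapes come from your error message):

```python
import numpy as np

# Stand-in for your windowed audio: 11 windows of 43844 samples, 1 channel
audio_windowed = np.zeros((11, 43844, 1), dtype=np.float32)

# Select one window while keeping a leading batch dimension of 1,
# which gives the (1, 43844, 1) shape the model expects
single_instance = audio_windowed[0:1]
# Equivalent: np.expand_dims(audio_windowed[0], axis=0)

print(single_instance.shape)  # (1, 43844, 1)
```

Slicing with `[0:1]` (rather than indexing with `[0]`) preserves the batch axis, so no explicit reshape is needed.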

Batch Processing: If your intention is to process all 11 instances, you need to do it one at a time (since your model’s input shape suggests it can only handle one instance at a time). You can loop through your instances and process them individually.

Here’s a revised version of your code snippet to handle one instance at a time:

```python
import tensorflow as tf

# Load the quantized model
interpreter = tf.lite.Interpreter(model_path='quantized_model.tflite')
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
input_index = input_details[0]['index']

# Process each instance in audio_windowed one at a time
for i in range(audio_windowed.shape[0]):
    # Reshape the i-th window to the model's expected input shape ([1, 43844, 1])
    # and cast to the input dtype, which may be int8 for a quantized model
    input_tensor = (audio_windowed[i].numpy()
                    .reshape(input_details[0]['shape'])
                    .astype(input_details[0]['dtype']))
    interpreter.set_tensor(input_index, input_tensor)
    interpreter.invoke()
    # Get the output and process it as needed
    output_data = interpreter.get_tensor(output_details[0]['index'])
    # ... process output_data ...
```

In this code, audio_windowed[i].numpy().reshape(input_details[0]['shape']) reshapes each instance to the required input shape of the model. This assumes that each instance in audio_windowed can be reshaped to [1, 43844, 1]. If that’s not the case, you’ll need to adjust your data preparation process accordingly.
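If you would rather feed all 11 windows in a single call, the TensorFlow Lite interpreter can often resize its input tensor at runtime via `resize_tensor_input`. This is a hedged sketch, not guaranteed to work for every model: TFLite conversion typically fixes the batch dimension to 1, and some (especially quantized) ops reject a resized batch. It assumes the `quantized_model.tflite` file and `audio_windowed` tensor from your snippet:

```python
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='quantized_model.tflite')

input_details = interpreter.get_input_details()
input_index = input_details[0]['index']

# Resize the input from [1, 43844, 1] to [11, 43844, 1], then re-allocate
interpreter.resize_tensor_input(input_index, [11, 43844, 1])
interpreter.allocate_tensors()

# Feed all windows at once, cast to the model's input dtype
interpreter.set_tensor(
    input_index,
    audio_windowed.numpy().astype(input_details[0]['dtype']))
interpreter.invoke()

output_details = interpreter.get_output_details()
output_data = interpreter.get_tensor(output_details[0]['index'])
```

If `allocate_tensors()` raises an error after the resize, the model does not support a dynamic batch dimension and you should fall back to the per-instance loop above.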

Thanks for your solution, I will test it within the next few days.
The non-quantized model runs on an input of shape (11, 43844, 1), so I would expect the quantized model to accept the same kind of input. I really do not understand why the model's input shape changed during quantization.