I converted my TensorFlow model to TensorFlow Lite, following the Post-training integer quantization | TensorFlow Lite tutorial to perform full-integer post-training quantization, and I use tf.lite.Interpreter to run inference. In this process, I am wondering: is it possible to view the weights of the model after post-training quantization?
Hi @r8a, after quantization you can read the quantized weights back from the interpreter:
interpreter = tf.lite.Interpreter(model_content=tflite_model_quant)
interpreter.allocate_tensors()
tensor_details = interpreter.get_tensor_details()  # details for every tensor, not just inputs
weights = interpreter.tensor(tensor_details[9]['index'])()
By changing the index position you will get the weights of the different layers. Please refer to this gist for a working code example. Thank you.
Hi @Kiran_Sai_Ramineni, thank you so much! You’re a star.