How to correctly do TFLite full-integer quantization of DTLN (Dual-signal Transformation LSTM Network)?

Hi everyone,
I was trying to do TFLite post-training quantization, following breizhn/DTLN.
The author did the default (float32) conversion and split the network into two TFLite models, because TF 2.3 did not support complex values well.
I then tried TF 2.15: I loaded his saved model from 'DTLN/pretrained_model/dtln_saved_model' and attempted full-integer quantization. It fails with the error shown below, and I am not sure how to fix it.

While searching for help, I found that someone has already done this and shared the resulting TFLite model (link modified by moderator: https://github.com ).
The model graph is simpler and more concise than the original two-stage TFLite models, but the author, nyadla-sys, did not explain how it was done or share a Python script.

I am wondering how he made it. I am not sure whether this needs other skills, since I only know the basic converter steps; perhaps it is simple and I just have no idea how to do it.
I hope someone can help here and give clearer steps, or even a script, to help me understand how to merge the two stages into one model.
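My naive guess is that the two stages could be wrapped in a single tf.function and converted together, roughly like below. This is pure speculation on my side: the SavedModel paths, the input shape, and the glue between the stages are placeholders, not the real DTLN wiring.

import tensorflow as tf

# Pure speculation: load the two stage SavedModels, chain them inside one
# tf.function, and convert that single function. Paths, input shape and the
# glue between the stages are placeholders, not the real DTLN wiring.
stage_1 = tf.saved_model.load("dtln_stage_1_saved_model")
stage_2 = tf.saved_model.load("dtln_stage_2_saved_model")

@tf.function(input_signature=[tf.TensorSpec([1, 512], tf.float32)])
def combined(x):
    y = stage_1(x)       # whatever stage 1 really expects / returns
    return stage_2(y)    # feed its output into stage 2

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [combined.get_concrete_function()])
tflite_model = converter.convert()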
Thanks.

Hi @saraphinesER, could you please share standalone code to reproduce the issue? Thank you.

Hi @Kiran_Sai_Ramineni, really sorry for the late reply; I was busy with something else for two weeks and had almost no chance to check here. I hope this doesn't make you think I am just a hit-and-run poster.
Here is my script to do the post-training quantization with full integer: breizhn_DTLN_PTQ_modify.py
It uses the model 'saved_model.pb' as input, and then I get the same error as in my first post.
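For reference, the conversion part of that script looks roughly like this. It is a minimal sketch: the paths and the representative-dataset shape of (1, 512) are placeholders I picked for illustration, not the exact code.

import numpy as np
import tensorflow as tf

# Minimal sketch of the full-integer PTQ attempt; paths and the
# representative input shape are placeholders, not the exact script.
def representative_dataset():
    for _ in range(100):
        # random frames just to drive the calibration
        yield [np.random.uniform(-1.0, 1.0, size=(1, 512)).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model(
    "breizhn_DTLN/pretrained_model/dtln_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()   # the error is raised here

with open("dtln_full_int8.tflite", "wb") as f:
    f.write(tflite_model)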

Any advice or comments are welcome.
Thanks.

Hi @saraphinesER, could you please confirm whether you are trying to quantize the model that was saved as a saved_model.pb file? If yes, could you please let me know the TensorFlow version you used to save the model, because I am not able to load that model in 2.17. Thank you.

Hi @Kiran_Sai_Ramineni,
Yes, I am trying to quantize the model from a "saved_model.pb" file via the tf.lite.TFLiteConverter.from_saved_model(...) API.
I was using TF 2.15.0 both locally and in Colab, and there was no error when loading the model.
I have not had a chance to try 2.17.0 since Colab upgraded to 2.17.0 on 2024-08-20.
As far as I know, TF made a big change in 2.16, switching the default from Keras 2 to Keras 3, and I am not sure whether that has something to do with the saved_model.
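If the Keras 3 switch does turn out to matter, one thing that might be worth trying on 2.16/2.17 is the legacy-Keras shim. This is just an untested idea on my side; it needs `pip install tf-keras` first.

import os
# Untested idea: force Keras 2 behaviour on TF 2.16+ (requires the tf-keras
# package); the variable has to be set before TensorFlow is imported.
os.environ["TF_USE_LEGACY_KERAS"] = "1"

import tensorflow as tf
print(tf.__version__)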

Hi @saraphinesER, I am not able to load the model from the saved_model.pb file alone, because when you save the model with model.save() other files are also created that hold the model weights and the metadata needed to load it. Could you please provide all the files needed to load the model? Thank you.

Hi @Kiran_Sai_Ramineni,
Sorry for not describing it well. The other information is actually saved alongside the .pb file, in a "variables" folder inside the same SavedModel directory.
[screenshot of the SavedModel folder showing saved_model.pb next to the variables/ subfolder]
So, to load 'saved_model.pb', the loading code should be written as below:

from tensorflow.keras.models import load_model
model = load_model(r".\breizhn_DTLN\pretrained_model\dtln_saved_model", compile=False)
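For the quantization itself, the converter is then pointed at that same directory (not at the saved_model.pb file inside it), roughly like this:

import tensorflow as tf

# The TFLite converter also takes the SavedModel directory, not the .pb file.
converter = tf.lite.TFLiteConverter.from_saved_model(
    r".\breizhn_DTLN\pretrained_model\dtln_saved_model")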

I also load the model in the same way in my script.
breizhn_DTLN_PTQ_modify.py

Thanks for your patience.