Quantization Aware Training using tensorflow_model_optimization

Hello,
I am trying to quantize a model using quantization aware training (QAT) with tfmot. When I call

tfmot.quantization.keras.quantize_model(keras_model)

I get the following error:

ValueError: Unable to clone model. This generally happens if you used custom Keras layers or objects in your model. Please specify them via quantize_scope for your calls to quantize_model and quantize_apply. [Unknown constraint: WeightClip].

The original Keras model uses a custom WeightClip constraint to clip the layer weights.

Following is the implementation of WeightClip:

[screenshot of the WeightClip implementation]
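In outline, it follows the standard Keras Constraint pattern (the exact clip value and attribute names are in the screenshot; the names below are illustrative):

import tensorflow as tf

class WeightClip(tf.keras.constraints.Constraint):
    # Clips the layer weights to [-clip_value, clip_value] after each
    # optimizer update.
    def __init__(self, clip_value=2.0):  # illustrative default
        self.clip_value = clip_value

    def __call__(self, weights):
        return tf.clip_by_value(weights, -self.clip_value, self.clip_value)

    # get_config lets Keras serialize and clone the model, which
    # quantize_model does internally.
    def get_config(self):
        return {"clip_value": self.clip_value}

The constraint is attached to layers via, e.g., kernel_constraint=WeightClip(...).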

How can I make tfmot recognize the custom object so that it can apply the quantization wrapper to the model? Please let me know if anyone has faced this too.

Thanks in advance.


Hi @Swaraj_Badhei, to make tfmot recognize custom layers or objects, you need to specify them via quantize_scope:

custom_objects={"weight":weightclip}
with tfmot.quantization.keras.quantize_scope(custom_objects):

For more details, you can refer to this documentation. Thank you.


I tried following the Google Research Colab example to apply QAT to MobileNet, but I am getting this error:

“ValueError: Layer <tf_keras.src.layers.convolutional.conv2d.Conv2D object at 0x799329e4a530> supplied to wrapper is not a supported layer type. Please ensure wrapped layer is a valid Keras layer.”

This seems related to the error discussed above: “Unable to clone model. This generally happens if you used custom Keras layers or objects in your model. Please specify them via quantize_scope for your calls to quantize_model and quantize_apply.”

Please let me know if anyone has faced this too.

I really appreciate any help you can provide.

I have the same problem with Conv2D layers. Did you solve it?

Try reinstalling tfmot or using a fresh conda environment, @Lisa. This error is often caused by a Keras implementation mismatch: the layer in the traceback comes from tf_keras (the legacy Keras 2 package), while tfmot may be resolving a different Keras implementation, so a clean environment with consistent versions can fix it.
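If that mismatch is the cause (TF 2.16+ defaults to Keras 3, which tfmot's QAT API does not support), one commonly suggested workaround is to force the legacy Keras implementation before importing TensorFlow. A minimal sketch, assuming the tf-keras package is installed:

import os

# tfmot's QAT API targets Keras 2; this forces the legacy implementation.
# It must be set before tensorflow is imported anywhere in the process,
# and it requires the tf-keras package to be installed.
os.environ["TF_USE_LEGACY_KERAS"] = "1"

import tensorflow as tf
import tensorflow_model_optimization as tfmot

# With a consistent Keras implementation, quantize_model should accept
# standard layers such as Conv2D.
model = tf.keras.applications.MobileNet(weights=None)
qat_model = tfmot.quantization.keras.quantize_model(model)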