Hello
I am getting the following error when trying to do quantization-aware training with TensorFlow 2.7:
ValueError: `to_quantize` can only either be a tf.keras Sequential or Functional model.
The error occurs when calling this method:
quantize_model = tfmot.quantization.keras.quantize_model(model)
The model is defined below. I suppose the reason is that subclassed models are not supported? I have already trained multiple models (normal training, not QAT) with the definition below. Post-training quantization works, but I would like to try quantization-aware training to see if it improves performance. Is there a way to do quantization-aware training with the model below, or alternatively to define it in another way and redo normal training?
import tensorflow as tf
from tensorflow.keras.layers import Dense, Flatten, Activation
from tensorflow.keras.models import Model

class Mobilenet_v2_transfer(Model):
    def __init__(self):
        super(Mobilenet_v2_transfer, self).__init__()
        self.base = tf.keras.applications.mobilenet_v2.MobileNetV2(
            input_shape=(224, 224, 3), alpha=1.0, include_top=False,
            weights='imagenet', pooling='avg')
        self.base.trainable = True
        # Freeze the first 130 layers of the base model
        for layer in self.base.layers[:130]:
            layer.trainable = False
        self.flatten = Flatten()
        self.dense = Dense(1, kernel_regularizer=tf.keras.regularizers.L2(0.01))
        self.sigmoid = Activation('sigmoid')

    def call(self, x):
        x = self.base(x)
        x = self.flatten(x)
        x = self.dense(x)
        x = self.sigmoid(x)
        return x
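For reference, here is a sketch of how the same model could be rewritten with the Functional API, which is the model type `quantize_model` says it expects. This is my own hypothetical reformulation (the `build_mobilenet_v2_transfer` helper name is made up), and note that `quantize_model` may still have trouble with the nested `MobileNetV2` submodel, in which case the base model's layers may need to be annotated or unwrapped separately:

```python
import tensorflow as tf

def build_mobilenet_v2_transfer(weights='imagenet'):
    # Same backbone as the subclassed version above
    base = tf.keras.applications.mobilenet_v2.MobileNetV2(
        input_shape=(224, 224, 3), alpha=1.0, include_top=False,
        weights=weights, pooling='avg')
    base.trainable = True
    # Freeze the first 130 layers, as in the original definition
    for layer in base.layers[:130]:
        layer.trainable = False

    # Express the forward pass as a Functional graph
    inputs = tf.keras.Input(shape=(224, 224, 3))
    x = base(inputs)
    x = tf.keras.layers.Flatten()(x)
    x = tf.keras.layers.Dense(
        1, kernel_regularizer=tf.keras.regularizers.L2(0.01))(x)
    outputs = tf.keras.layers.Activation('sigmoid')(x)
    return tf.keras.Model(inputs, outputs)
```

The trained weights from the subclassed model could presumably be transferred to this version layer by layer (the layer structure is identical), avoiding a full retraining before attempting QAT.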