Describe the expected behaviour
The global policy I set in the previous cell was mixed_float16. The code works fine when running on TensorFlow 2.4.1, so the bug appears to be in TensorFlow 2.5.0.
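For context, the previous cell set the policy with the standard Keras mixed-precision API (a minimal sketch of that cell; the API is available in both TF 2.4 and 2.5):

from tensorflow.keras import mixed_precision

mixed_precision.set_global_policy("mixed_float16") # compute in float16, keep variables in float32
print(mixed_precision.global_policy()) # <Policy "mixed_float16">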
You can reproduce the same error using the notebook below:
For some reason, I can’t include images or links in this reply.
The above screenshots are enough to get the gist of the problem, but if you're still confused, please check out the TensorFlow GitHub repo/issues; I've reported the same issue there too.
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.layers.experimental import preprocessing

# Create base model
input_shape = (224, 224, 3)
base_model = tf.keras.applications.EfficientNetB0(include_top=False)
base_model.trainable = False # freeze base model layers

# Create Functional model
inputs = layers.Input(shape=input_shape, name="input_layer")
# Note: EfficientNetBX models have rescaling built in, but if your model didn't you could add a layer like below
# x = preprocessing.Rescaling(1./255)(inputs)
x = base_model(inputs, training=False) # set base_model to inference mode only
x = layers.GlobalAveragePooling2D(name="pooling_layer")(x)
# class_names is assumed to be defined earlier in the notebook (one name per class)
x = layers.Dense(len(class_names))(x) # want one output neuron per class
# Separate activation of output layer so we can output float32 activations
outputs = layers.Activation("softmax", dtype="float32")(x)
model = tf.keras.Model(inputs, outputs)

# Compile the model
model.compile(loss="sparse_categorical_crossentropy", # use sparse_categorical_crossentropy when labels are *not* one-hot
              optimizer=tf.keras.optimizers.Adam(),
              metrics=["accuracy"])
model.summary()
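If it helps with debugging, you can also check which layers are actually running under the mixed policy (a quick diagnostic snippet, not part of the original notebook):

# Print each layer's dtype policy to see where float16 vs float32 is used
for layer in model.layers:
    print(layer.name, layer.dtype, layer.dtype_policy)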
I was working with mixed_precision earlier today and things seemed to be working smoothly, but when I tried to run the same block of code again it threw an error.
So does that mean there is an issue with the EfficientNetB0 model? Just now I built a ResNet101 model with mixed precision and it works fine.
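For reference, the ResNet101 version looked roughly like this (a sketch; only the base model is swapped, the rest of the pipeline is assumed identical):

# Same pipeline with ResNet101 swapped in as the base model
base_model = tf.keras.applications.ResNet101(include_top=False)
base_model.trainable = False

inputs = layers.Input(shape=input_shape, name="input_layer")
# Note: unlike EfficientNet, ResNet has no built-in rescaling; real inputs
# would normally go through tf.keras.applications.resnet.preprocess_input
x = base_model(inputs, training=False)
x = layers.GlobalAveragePooling2D(name="pooling_layer")(x)
x = layers.Dense(len(class_names))(x)
outputs = layers.Activation("softmax", dtype="float32")(x)
resnet_model = tf.keras.Model(inputs, outputs)
resnet_model.summary() # builds without the error on TF 2.5.0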