Hi @Laxma_Reddy_Patlolla
After doing some more research, I found out that both the data_augmentation and preprocessing steps are being triggered, which means that model(images) activates every layer of the model, whereas model.predict(images) only activates the layers that are supposed to be active during inference. So model.predict is the correct way to get predictions during inference when data_augmentation and preprocessing are integrated as model layers, as described in the TF guide (Transfer learning and fine-tuning | TensorFlow Core).
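For context, here is a minimal sketch of the kind of model layout I mean; the layer names, input size, and augmentations are illustrative assumptions, not the exact architecture:

import tensorflow as tf

# Illustrative layout only: augmentation and preprocessing live inside the
# model as layers, and the actual classifier is a nested model, so that
# model.layers[2] is the classifier itself.
data_augmentation = tf.keras.Sequential(
    [tf.keras.layers.RandomFlip("horizontal"),
     tf.keras.layers.RandomRotation(0.1)],
    name="data_augmentation",
)

preprocessing = tf.keras.layers.Resizing(384, 384, name="preprocessing")

backbone = tf.keras.applications.EfficientNetV2S(
    include_top=False, weights="imagenet", pooling="avg")
backbone.trainable = False

classifier = tf.keras.Sequential(
    [backbone, tf.keras.layers.Dense(7, activation="softmax")],
    name="classifier",
)

model = tf.keras.Sequential([data_augmentation, preprocessing, classifier])
model.build(input_shape=(None, 384, 384, 3))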
Reasoning
model(images) gives different results on each run, which means data_augmentation is also active (see the quick check after the two outputs below):
model(tf.expand_dims(img_tensors, 0))
## Output:
<tf.Tensor: shape=(1, 7), dtype=float32, numpy=
array([[0.01726468, 0.02960926, 0.89761806, 0.01863422, 0.00520058,
0.01019412, 0.02147901]], dtype=float32)>
model(tf.expand_dims(img_tensors, 0))
## Output:
<tf.Tensor: shape=(1, 7), dtype=float32, numpy=
array([[0.03177946, 0.01744751, 0.88826525, 0.01340565, 0.00696777,
0.0177426 , 0.02439182]], dtype=float32)>
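A quick way to confirm that the randomness comes from the augmentation layers is to call that block directly with the training flag set explicitly. This sketch assumes model.layers[0] is the data_augmentation block, as in the illustrative layout above:

import numpy as np
import tensorflow as tf

batch = tf.expand_dims(img_tensors, 0)   # img_tensors: the same single image as above
aug = model.layers[0]                    # assumed to be the data_augmentation block

# In inference mode the augmentation block should pass the image through unchanged...
out_infer = aug(batch, training=False)
print(np.allclose(batch.numpy(), out_infer.numpy()))   # expected: True

# ...while in training mode it randomly transforms it, which is what a plain
# model(images) call ends up triggering here.
out_train = aug(batch, training=True)
print(np.allclose(batch.numpy(), out_train.numpy()))   # usually False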
whereas model.predict always returns the same result (a determinism check follows these outputs):
model.predict(tf.expand_dims(img_tensors, 0))
## Output:
array([[0.11547963, 0.04055786, 0.67642653, 0.02366884, 0.02861447,
0.07121295, 0.04403977]], dtype=float32)
model.predict(tf.expand_dims(img_tensors, 0))
## Output:
array([[0.11547963, 0.04055786, 0.67642653, 0.02366884, 0.02861447,
0.07121295, 0.04403977]], dtype=float32)
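This can also be checked programmatically: repeated model.predict calls agree with each other, and a direct call with training=False should agree with them as well, since the training-only layers are then skipped:

import numpy as np
import tensorflow as tf

batch = tf.expand_dims(img_tensors, 0)

p1 = model.predict(batch)
p2 = model.predict(batch)
print(np.allclose(p1, p2))              # expected: True

# Forcing inference mode on a direct call should give the same deterministic
# result, because the augmentation layers are skipped in that mode.
direct = model(batch, training=False).numpy()
print(np.allclose(p1, direct))          # expected: True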
I also checked the results of the actual model on its own (model.layers[2], the EfficientNetV2S-based submodel that is the last layer of the outer model). Here, calling it directly and calling its predict give the same result, which also equals the model.predict output in the block above (the layer listing after these outputs shows how to locate this index):
model.layers[2](tf.expand_dims(img_tensors, 0))
## Output:
<tf.Tensor: shape=(1, 7), dtype=float32, numpy=
array([[0.11547963, 0.04055786, 0.67642653, 0.02366884, 0.02861447,
0.07121295, 0.04403977]], dtype=float32)>
model.layers[2].predict(tf.expand_dims(img_tensors, 0))
## Output:
array([[0.11547963, 0.04055786, 0.67642653, 0.02366884, 0.02861447,
0.07121295, 0.04403977]], dtype=float32)
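For anyone reproducing this: the index 2 depends on how the outer model was assembled, so listing the layers first helps to find the nested classifier model (the names shown are the illustrative ones from the sketch at the top, not necessarily the real ones):

# List the outer model's layers to find the nested classifier model.
for i, layer in enumerate(model.layers):
    print(i, layer.name, type(layer).__name__)
# e.g. 0 data_augmentation Sequential
#      1 preprocessing Resizing
#      2 classifier Sequential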
Conclusion
model(images) and model.predict(images) are not the same and therefore behave differently.
Thanks