How to read Batchnorm layer's parameters in TF2.x

“Not a contribution”

We are developing our project for TF2.x with eager execution disabled.

In TF1.x we can read a Batch Normalization layer's parameters, such as epsilon and momentum, using graph structures / graph APIs. How can we achieve the same in TF 2.x without using high-level Keras APIs?

More Details:
We have defined a default Keras BN layer in our model as below:
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
inp = tf.random.normal((4, 2, 2, 3))
out = tf.keras.layers.BatchNormalization()(inp)
session = tf.compat.v1.Session()
op = session.graph.get_operation_by_name('batch_normalization')  # op.type == 'FusedBatchNormV3'

In TF2.x, there is no corresponding primitive operation (e.g. an op of type 'FusedBatchNormV3') in the graph; the batch normalization is treated as a function call, and we only end up with operations of type 'Identity'.
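
For reference, this is roughly how we check: continuing from the snippet above, we iterate over the graph's operations and look for a fused batch norm primitive (the filtering below is only illustrative of what we inspect):

# Continuing from the session created above
fused_ops = [op for op in session.graph.get_operations()
             if op.type == 'FusedBatchNormV3']
print('FusedBatchNormV3 ops found:', len(fused_ops))  # we find none in our TF2.x setup

# Inspect the ops that belong to the batch normalization layer
for op in session.graph.get_operations():
    if 'batch_normalization' in op.name:
        print(op.name, op.type)  # only 'Identity'-type ops show up for us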



Hi @Hitarth_Mehta,

In TensorFlow 2.x, Keras layers are Python objects, and configuration parameters such as epsilon and momentum can be read directly as attributes of the layer object.

import tensorflow as tf
tf.compat.v1.disable_eager_execution()

# Define the batch normalization layer
bn_layer = tf.keras.layers.BatchNormalization()

# Access the configuration parameters directly as attributes
epsilon_value = bn_layer.epsilon    # defaults to 1e-3
momentum_value = bn_layer.momentum  # defaults to 0.99
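
If you also need the learned parameters (gamma, beta and the moving statistics), they become attributes of the layer as soon as it has been built. A minimal sketch, assuming you build the layer by calling it on an input as in your snippet:

# Build the layer so its variables are created
inp = tf.random.normal((4, 2, 2, 3))
_ = bn_layer(inp)

# The variables are plain attributes as well
print(bn_layer.gamma, bn_layer.beta)                   # scale / offset
print(bn_layer.moving_mean, bn_layer.moving_variance)  # running statistics
print(bn_layer.weights)                                # all of the above as a list

Note that with eager execution disabled these are graph variables, so reading their actual values still requires evaluating them in a session.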

I hope this helps you.

Thanks