I have a custom Relative Mean Squared Error loss function that works fine on a single GPU. Now I want to upgrade it so that I can use tf.distribute.Strategy. I don't understand what specific changes I have to make so that my custom loss function works the way it did on a single GPU. Please help me!
class RelMSE(keras.losses.Loss):
    def __init__(self, denom_nonzero=1e-5, **kwargs):
        super().__init__(**kwargs)
        self.denom_nonzero = denom_nonzero

    def call(self, y_true, y_pred):
        # Compute the MSE of each example
        mse = tf.reduce_mean(tf.square(y_pred - y_true), axis=-1)
        # Compute the mean of squares of the true values
        true_norm = tf.reduce_mean(tf.square(y_true), axis=-1)
        # Ensure there are no 'zero' values in the denominator before division
        true_norm += self.denom_nonzero
        # Compute relative MSE of each example
        err = tf.truediv(mse, true_norm)
        # Compute mean over batch
        err = tf.reduce_mean(err, axis=-1)
        # Return the error
        return err
Hi @Kanav_Rana, within the strategy scope you have to compile the model with your custom loss function.
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    # define your model
    model.compile(optimizer='adam', loss=RelMSE())
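For completeness, a minimal runnable sketch of this suggestion. The Dense model and the random data are placeholders, not from the original post, and the final mean over the batch is left to the base class's reduction, which is the usual convention for `Loss` subclasses:

```python
import numpy as np
import tensorflow as tf

class RelMSE(tf.keras.losses.Loss):
    def __init__(self, denom_nonzero=1e-5, **kwargs):
        super().__init__(**kwargs)
        self.denom_nonzero = denom_nonzero

    def call(self, y_true, y_pred):
        # Per-example relative MSE; the Loss base class averages over the batch.
        mse = tf.reduce_mean(tf.square(y_pred - y_true), axis=-1)
        true_norm = tf.reduce_mean(tf.square(y_true), axis=-1) + self.denom_nonzero
        return tf.truediv(mse, true_norm)

strategy = tf.distribute.MirroredStrategy()  # falls back to CPU when no GPU is visible
with strategy.scope():
    # Placeholder architecture -- substitute your own model here.
    model = tf.keras.Sequential([tf.keras.Input(shape=(5,)),
                                 tf.keras.layers.Dense(3)])
    model.compile(optimizer="adam", loss=RelMSE())

# Dummy data just to show that fit() runs under the strategy.
x = np.random.rand(8, 5).astype("float32")
y = np.random.rand(8, 3).astype("float32")
history = model.fit(x, y, batch_size=4, epochs=1, verbose=0)
```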
Thank You.
The TensorFlow documentation states that to use a custom loss function with a distribution strategy, the user needs to perform the reduction themselves instead of relying on auto reduction. So when I use strategy.scope() directly, I get:
ValueError: Please use `tf.keras.losses.Reduction.SUM` or `tf.keras.losses.Reduction.NONE` for loss reduction when losses are used with `tf.distribute.Strategy` outside of the built-in training loops. You can implement `tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE` using global batch size like:

```
with strategy.scope():
    loss_obj = tf.keras.losses.CategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE)
    ....
    loss = tf.reduce_sum(loss_obj(labels, predictions)) * (1. / global_batch_size)
```

Please see https://www.tensorflow.org/tutorials/distribute/custom_training for more details.
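Following the recipe in that error message and the linked tutorial, the usual fix for a custom training loop is to build the loss with NONE reduction and average over the *global* batch size yourself, e.g. with `tf.nn.compute_average_loss`. A sketch (the final per-batch mean from the original `call` is dropped, since the reduction is now done by hand; the `compute_loss` helper name is illustrative):

```python
import tensorflow as tf

class RelMSE(tf.keras.losses.Loss):
    def __init__(self, denom_nonzero=1e-5, **kwargs):
        # "none": return one loss value per example; the cross-replica
        # averaging is done explicitly in compute_loss below.
        super().__init__(reduction="none", **kwargs)
        self.denom_nonzero = denom_nonzero

    def call(self, y_true, y_pred):
        mse = tf.reduce_mean(tf.square(y_pred - y_true), axis=-1)
        true_norm = tf.reduce_mean(tf.square(y_true), axis=-1) + self.denom_nonzero
        # No final mean over the batch here: with reduction="none" the
        # wrapper returns these per-example losses untouched.
        return tf.truediv(mse, true_norm)

loss_obj = RelMSE()

def compute_loss(y_true, y_pred, global_batch_size):
    per_example = loss_obj(y_true, y_pred)  # shape: (per_replica_batch,)
    # Sum the per-example losses and divide by the GLOBAL batch size, so
    # that summing across replicas reproduces SUM_OVER_BATCH_SIZE.
    return tf.nn.compute_average_loss(per_example,
                                      global_batch_size=global_batch_size)
```

Inside `strategy.run`, each replica would call `compute_loss` on its shard of the global batch.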
Hi @Kanav_Rana, could you please try these code lines (note that `**kwargs` must come after the other keyword parameters)
def __init__(self, denom_nonzero=1e-5,
             reduction=keras.losses.Reduction.AUTO, name='rel', **kwargs):
    super().__init__(reduction=reduction, name=name, **kwargs)
instead of
def __init__(self, denom_nonzero=1e-5, **kwargs):
    super().__init__(**kwargs)
Thank You.
Hi @Kiran_Sai_Ramineni, that code does not work: MirroredStrategy does not support reduction=keras.losses.Reduction.AUTO outside the built-in training loops. It only supports reduction=keras.losses.Reduction.NONE or reduction=keras.losses.Reduction.SUM.
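Putting the two points together, one option is to expose `reduction` in the constructor, pick SUM under the strategy, and divide by the global batch size by hand, as the error message above suggests. A sketch, with an illustrative `compute_loss` helper (SUM and NONE plus manual averaging are equivalent here; pick whichever fits your loop):

```python
import tensorflow as tf

class RelMSE(tf.keras.losses.Loss):
    # Expose `reduction` so callers can choose "none" or "sum" under a
    # distribution strategy (AUTO is rejected outside built-in loops).
    def __init__(self, denom_nonzero=1e-5, reduction="sum", name="rel", **kwargs):
        super().__init__(reduction=reduction, name=name, **kwargs)
        self.denom_nonzero = denom_nonzero

    def call(self, y_true, y_pred):
        mse = tf.reduce_mean(tf.square(y_pred - y_true), axis=-1)
        true_norm = tf.reduce_mean(tf.square(y_true), axis=-1) + self.denom_nonzero
        return tf.truediv(mse, true_norm)

loss_obj = RelMSE(reduction="sum")

def compute_loss(y_true, y_pred, global_batch_size):
    # "sum" gives the summed per-example loss on this replica; dividing by
    # the global batch size recovers SUM_OVER_BATCH_SIZE semantics once the
    # replica losses are added together.
    return loss_obj(y_true, y_pred) * (1.0 / global_batch_size)
```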