Loss function stops working when y_pred is modified

I am trying to create a custom loss function, but as soon as I make a copy of the y_pred (model predictions) tensor, the loss function stops working.

This function works:

def custom_loss(y_true, y_pred):
    y_true = tf.cast(y_true, dtype=y_pred.dtype)

    loss = binary_crossentropy(y_true, y_pred)

    return loss

The output is:

Epoch 1/10
 26/26 [==============================] - 5s 169ms/step - loss: 56.1577 - accuracy: 0.7867 - val_loss: 14.7032 - val_accuracy: 0.9185
 Epoch 2/10
 26/26 [==============================] - 4s 159ms/step - loss: 18.6890 - accuracy: 0.8762 - val_loss: 9.4140 - val_accuracy: 0.9185
 Epoch 3/10
 26/26 [==============================] - 4s 158ms/step - loss: 13.7425 - accuracy: 0.8437 - val_loss: 7.7499 - val_accuracy: 0.9185
 Epoch 4/10
 26/26 [==============================] - 4s 159ms/step - loss: 10.5267 - accuracy: 0.8510 - val_loss: 6.1037 - val_accuracy: 0.9185
 Epoch 5/10
 26/26 [==============================] - 4s 160ms/step - loss: 7.5695 - accuracy: 0.8544 - val_loss: 3.9937 - val_accuracy: 0.9185
 Epoch 6/10
 26/26 [==============================] - 4s 159ms/step - loss: 5.1320 - accuracy: 0.8538 - val_loss: 2.6940 - val_accuracy: 0.9185
 Epoch 7/10
 26/26 [==============================] - 4s 160ms/step - loss: 3.3265 - accuracy: 0.8557 - val_loss: 1.6613 - val_accuracy: 0.9185
 Epoch 8/10
 26/26 [==============================] - 4s 160ms/step - loss: 2.1421 - accuracy: 0.8538 - val_loss: 1.0443 - val_accuracy: 0.9185
 Epoch 9/10
 26/26 [==============================] - 4s 160ms/step - loss: 1.3384 - accuracy: 0.8601 - val_loss: 0.5159 - val_accuracy: 0.9184
 Epoch 10/10
 26/26 [==============================] - 4s 173ms/step - loss: 0.6041 - accuracy: 0.8895 - val_loss: 0.3164 - val_accuracy: 0.9185
 testing
 **********Testing model**********
 training AUC : 0.6204090733263475
 testing AUC: 0.6196677312833667

But this one does not work:

def custom_loss(y_true, y_pred):
    y_true = tf.cast(y_true, dtype=y_pred.dtype)

    y_p = tf.identity(y_pred)

    loss = binary_crossentropy(y_true, y_p)

    return loss

I get this output:

Epoch 1/10
26/26 [==============================] - 11s 179ms/step - loss: 1.3587 - accuracy: 0.9106 - val_loss: 1.2569 - val_accuracy: 0.9185
Epoch 2/10
26/26 [==============================] - 4s 159ms/step - loss: 1.2572 - accuracy: 0.9185 - val_loss: 1.2569 - val_accuracy: 0.9185
Epoch 3/10
26/26 [==============================] - 4s 158ms/step - loss: 1.2572 - accuracy: 0.9185 - val_loss: 1.2569 - val_accuracy: 0.9185
Epoch 4/10
26/26 [==============================] - 4s 158ms/step - loss: 1.2572 - accuracy: 0.9185 - val_loss: 1.2569 - val_accuracy: 0.9185
Epoch 5/10
26/26 [==============================] - 4s 158ms/step - loss: 1.2572 - accuracy: 0.9185 - val_loss: 1.2569 - val_accuracy: 0.9185
Epoch 6/10
26/26 [==============================] - 4s 158ms/step - loss: 1.2572 - accuracy: 0.9185 - val_loss: 1.2569 - val_accuracy: 0.9185
Epoch 7/10
26/26 [==============================] - 4s 159ms/step - loss: 1.2572 - accuracy: 0.9185 - val_loss: 1.2569 - val_accuracy: 0.9185
Epoch 8/10
26/26 [==============================] - 4s 159ms/step - loss: 1.2572 - accuracy: 0.9185 - val_loss: 1.2569 - val_accuracy: 0.9185
Epoch 9/10
26/26 [==============================] - 4s 160ms/step - loss: 1.2572 - accuracy: 0.9185 - val_loss: 1.2569 - val_accuracy: 0.9185
Epoch 10/10
26/26 [==============================] - 4s 159ms/step - loss: 1.2572 - accuracy: 0.9185 - val_loss: 1.2569 - val_accuracy: 0.9185
testing
**********Testing model**********
training AUC : 0.5
testing AUC : 0.5

Is there a problem with tf.identity() that is causing this issue?
Or is there another way to copy tensors that I should be using instead?

Hi @Athreya_Prakash, I have tried calculating binary crossentropy on sample data with both y_pred and tf.identity(y_pred) and got the same result in both cases. Please refer to this gist for a working code example. Could you please try with the latest version of TensorFlow and let us know if the issue still persists? Thank you.
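
For reference, here is a minimal sketch of that comparison (the sample values are made up for illustration). In TF 2.x, tf.identity returns a tensor with the same values as its input and passes gradients straight through, so both the loss and the gradients should match:

```python
import tensorflow as tf

# Made-up sample data for illustration.
y_true = tf.constant([[0.0], [1.0], [1.0], [0.0]])
y_pred = tf.Variable([[0.1], [0.8], [0.6], [0.3]])

bce = tf.keras.losses.binary_crossentropy

with tf.GradientTape(persistent=True) as tape:
    # Loss computed directly on y_pred vs. on a copy made with tf.identity.
    loss_direct = tf.reduce_mean(bce(y_true, y_pred))
    loss_copy = tf.reduce_mean(bce(y_true, tf.identity(y_pred)))

# tf.identity is differentiable: gradients flow through the copy
# back to y_pred exactly as they do for the direct computation.
grad_direct = tape.gradient(loss_direct, y_pred)
grad_copy = tape.gradient(loss_copy, y_pred)

print(loss_direct.numpy(), loss_copy.numpy())
print(grad_direct.numpy())
print(grad_copy.numpy())
```

If the two losses and gradients match on your setup as well, the problem is likely elsewhere in the training pipeline rather than in tf.identity itself.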