My network uses 3 losses that are applied to different layers. This is what I am doing currently:
trainable_vars = self.trainable_variables
# Passing a list of losses makes tape.gradient differentiate their sum
gradients = tape.gradient([loss1, loss2, loss3], trainable_vars)
I am hoping that the gradients get applied according to the layer outputs that were involved in the calculation of each loss. However, how do I verify this?
Is there any way to check which loss/gradient each layer in the network is affected by during backpropagation, without the need for custom layers?
Hi @anxious_learner,
This comprehensive example will help you visualize and analyze how different losses affect the various layers in your network.
Key features of this implementation include (see the sketch after the list):
- A custom TensorFlow model with four dense layers.
- Separate calculation of gradients for each loss function.
- Visualization of gradient distributions using histograms.
- Computation and display of gradient statistics (mean and standard deviation) for each layer.
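For reference, here is a minimal sketch of that approach. It is an illustration under assumptions, not the exact code from the gist: the layer sizes, the dummy data, and the three loss definitions (with loss2 placed on an intermediate layer so the per-layer effect is visible) are all made up for the demo.

```python
import tensorflow as tf

# Illustrative four-layer model (sizes are assumptions, not the gist's exact values).
# Exposing an intermediate activation lets us attach a loss to a specific layer.
inputs = tf.keras.Input(shape=(16,))
h1 = tf.keras.layers.Dense(32, activation="relu", name="dense_1")(inputs)
h2 = tf.keras.layers.Dense(32, activation="relu", name="dense_2")(h1)
h3 = tf.keras.layers.Dense(32, activation="relu", name="dense_3")(h2)
out = tf.keras.layers.Dense(1, name="dense_4")(h3)
model = tf.keras.Model(inputs, [h2, out])

x = tf.random.normal((8, 16))  # dummy batch
y = tf.random.normal((8, 1))   # dummy targets

# persistent=True lets us call tape.gradient once per loss.
with tf.GradientTape(persistent=True) as tape:
    h2_out, y_pred = model(x, training=True)
    loss1 = tf.reduce_mean(tf.square(y_pred - y))  # on the final output
    loss2 = tf.reduce_mean(tf.square(h2_out))      # auxiliary loss on dense_2's output
    loss3 = tf.reduce_mean(tf.abs(y_pred - y))     # also on the final output

# Compute gradients separately per loss and print per-layer statistics.
# A gradient of None means that variable is not on the loss's backprop path.
for name, loss in [("loss1", loss1), ("loss2", loss2), ("loss3", loss3)]:
    grads = tape.gradient(loss, model.trainable_variables)
    print(f"--- {name} ---")
    for var, g in zip(model.trainable_variables, grads):
        if g is None:
            print(f"  {var.name}: no gradient (layer unaffected by this loss)")
        else:
            mean = tf.reduce_mean(g).numpy()
            std = tf.math.reduce_std(g).numpy()
            print(f"  {var.name}: mean={mean:+.4e}, std={std:.4e}")

del tape  # free the resources held by the persistent tape
```

Running this, loss2 should report no gradient for dense_3 and dense_4, which is exactly the per-layer check asked about above; the per-layer gradients can also be flattened and passed to e.g. matplotlib's hist to get the histogram view.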
I executed it with a dummy model; I am attaching a gist for the same, kindly refer to it.
Hope this helps,
Thank you!