We can calculate a weighted loss by defining a custom loss like this:
custom_loss = l_1 * loss1 + l_2 * loss2 + … + l_N * lossN
I wanted to know whether the constraint on the l_i's should be
l_1 + l_2 + … + l_N = 1
or whether the sum could be greater than 1.
What are the pros and cons of choosing a sum larger than 1?
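For concreteness, a minimal sketch of the kind of combined loss I mean (the component losses, the weight values, and the compile call are just placeholders for illustration):

import tensorflow as tf

def combined_loss(l_1=0.7, l_2=0.3):
    # Weighted sum of two component losses; here l_1 + l_2 = 1,
    # but the question is whether that normalization is required.
    mse = tf.keras.losses.MeanSquaredError()
    mae = tf.keras.losses.MeanAbsoluteError()

    def loss(y_true, y_pred):
        return l_1 * mse(y_true, y_pred) + l_2 * mae(y_true, y_pred)

    return loss

# e.g. model.compile(optimizer="adam", loss=combined_loss(0.7, 0.3))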
A weight constraint is a check applied to the network's weights during training: if the norm of the weights exceeds a predefined limit, the weights are rescaled so that their norm falls below the limit or within a specified range.
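For example, in Keras a max-norm weight constraint can be attached to a layer like this (a minimal sketch; the layer size and the limit of 3 are arbitrary choices):

from tensorflow.keras.layers import Dense
from tensorflow.keras.constraints import MaxNorm

# Rescales the kernel weights whenever their norm exceeds the limit.
layer = Dense(64, activation="relu", kernel_constraint=MaxNorm(3))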
You are talking about L1 and L2 weight regularization; I was talking about a weighted loss.
I wanted to know about the design of a loss function as shown in this post, for example: python - Keras/Tensorflow: Combined Loss function for single output - Stack Overflow