How to normalize output tensor to have consistent results?

When I run classification with model files loaded in TensorFlow, the output tensor values have ranges that differ considerably depending on the model used.

Getting the maximum and minimum values from the output tensor itself to normalize would give inconsistent results, because the maximum and minimum used for normalization would then depend on the currently classified object's results rather than on the model's own range of values. So I would get values in the [0, 1] range that are not comparable from one classification to another.
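
For example, this is the kind of per-output normalization I am trying to avoid (the score values here are made up):

import numpy as np

# Raw output tensors from the same model for two different inputs (made-up values).
scores_a = np.array([2.0, 5.0, 9.0])
scores_b = np.array([0.1, 0.4, 0.9])

# Normalizing each output by its own min/max squeezes both into [0, 1],
# so the normalized scores are not comparable between classifications.
norm_a = (scores_a - scores_a.min()) / (scores_a.max() - scores_a.min())
norm_b = (scores_b - scores_b.min()) / (scores_b.max() - scores_b.min())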

Is there a way to determine the possible maximum and minimum values for a given model, so that the values can be consistently normalized to the [0, 1] range?

Hi @AndrewFar,

Yes, there is a way to determine the possible maximum and minimum values according to the model so that the values can be consistently normalized in the [0, 1] range.

One way to do this is to use the get_weights() method. Called on a Keras model or layer, it returns the weights as a list of NumPy arrays. The weights of the output layer are the parameters that determine the range of the output values.

Once you have the weights of the output layer, you can calculate the minimum and maximum values of the output tensor by taking the minimum and maximum values of the weights.
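
Here is a minimal sketch of that idea, assuming a Keras model; the toy architecture and layer sizes are just placeholders for your actual model:

import tensorflow as tf

# Toy Keras model standing in for your actual model (architecture is a placeholder).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3),  # output layer
])

# get_weights() on the output layer returns its weights as a list of
# NumPy arrays, here [kernel, bias].
kernel, bias = model.layers[-1].get_weights()

# Model-dependent range taken from the output-layer parameters.
min_value = min(kernel.min(), bias.min())
max_value = max(kernel.max(), bias.max())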

normalized_values = (model.predict(x) - min_value) / (max_value - min_value)
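
Here min_value and max_value are the values derived from the output-layer weights as above, and x is the input you are classifying. Since the range depends only on the model, the same min_value and max_value are used for every prediction, so the normalized values stay comparable across classifications.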

I hope this helps!

Thanks.

I load the model with model = tf.saved_model.load("/path/to/model") and I get the error:

object has no attribute 'get_weights'
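
For reference, this minimal snippet (with a placeholder path) reproduces it:

import tensorflow as tf

# Load the SavedModel and try to read its weights as suggested above.
model = tf.saved_model.load("/path/to/model")  # placeholder path
weights = model.get_weights()  # AttributeError: object has no attribute 'get_weights'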