I see in the graph mode tutorial that all outputs from a function decorated with @tf.function should be returned, and that directly modifying global variables is bad. In the custom training tutorial, however, the training accuracy metric (train_acc_metric.update_state()), the optimizer (optimizer.apply_gradients()), the loss function, and even the model are neither passed in as parameters nor returned as the other tutorial recommends, yet calling them modifies values outside the decorated function. Can someone please explain why, in this case, it is acceptable to change values in the outer scope through direct access without returning them?
Hi @lakshmi_poda,
Thanks for using the forum.
As far as I know, the warning in the graph mode tutorial is about mutating plain Python state (lists, dicts, counters) inside @tf.function: that Python code only runs while the function is being traced, so such side effects do not happen on every call. TF objects like metrics, optimizers, and models are different because their state lives in tf.Variable objects. tf.function tracks tf.Variables and turns operations such as train_acc_metric.update_state() and optimizer.apply_gradients() into stateful graph ops, so those mutations execute on every call, just as in eager mode. Since updating metrics and applying gradients is part of training a model, these objects were designed to be mutable in graph mode, which is why it is acceptable to update them in the outer scope without passing them in or returning them.
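A minimal sketch of the difference (assuming TensorFlow 2.x; the variable and metric names here are just for illustration):

```python
import tensorflow as tf

# tf.Variable state is tracked by tf.function and updated on every call.
counter = tf.Variable(0)

# Plain Python state is only touched while the function is traced.
trace_count = [0]

@tf.function
def step():
    trace_count[0] += 1      # Python side effect: runs once, at trace time
    counter.assign_add(1)    # becomes a stateful graph op: runs every call
    return counter

for _ in range(3):
    step()

print(counter.numpy())    # 3 -> the tf.Variable was updated on each call
print(trace_count[0])     # 1 -> the Python list was mutated only during tracing

# Metrics behave the same way: update_state() mutates internal tf.Variables.
mean = tf.keras.metrics.Mean()

@tf.function
def record(x):
    mean.update_state(x)

record(tf.constant(2.0))
record(tf.constant(4.0))
print(mean.result().numpy())  # 3.0
```

This is why the tutorial's train step can call update_state() and apply_gradients() freely: all of that state is tf.Variable-backed, so the graph carries the mutations, whereas a plain Python counter would silently stop updating after the first trace.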
Hope this helps. Thank you.