How do I use tf.distribute.Strategy to distribute training?

Some people have run into the following warning, and its message points to the fix you are likely looking for: wrap the per-replica training step in a `tf.function`.

```
WARNING:tensorflow:Using MirroredStrategy eagerly has significant overhead
currently. We will be working on improving this in the future, but for now
please wrap `call_for_each_replica` or `experimental_run` or `run` inside a
tf.function to get the best performance.
```
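
Following the warning's advice, a minimal sketch of MirroredStrategy usage might look like the code below. The tiny Dense model, SGD optimizer, random dataset, and batch size are illustrative placeholders, not part of the original question; the key point is that `strategy.run` is called inside a `tf.function`.

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
GLOBAL_BATCH_SIZE = 16

with strategy.scope():
    # Variables (model weights, optimizer slots) must be created inside the scope.
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    optimizer = tf.keras.optimizers.SGD(0.01)
    loss_fn = tf.keras.losses.MeanSquaredError(
        reduction=tf.keras.losses.Reduction.NONE)

# Toy dataset, sharded across replicas by the strategy.
dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal([64, 4]), tf.random.normal([64, 1]))
).batch(GLOBAL_BATCH_SIZE)
dist_dataset = strategy.experimental_distribute_dataset(dataset)

def train_step(inputs):
    x, y = inputs
    with tf.GradientTape() as tape:
        pred = model(x, training=True)
        per_example_loss = loss_fn(y, pred)
        # Average over the global batch so gradients are scaled correctly
        # when summed across replicas.
        loss = tf.nn.compute_average_loss(
            per_example_loss, global_batch_size=GLOBAL_BATCH_SIZE)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

@tf.function  # wrapping strategy.run avoids the eager overhead the warning describes
def distributed_train_step(inputs):
    per_replica_losses = strategy.run(train_step, args=(inputs,))
    return strategy.reduce(
        tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None)

for batch in dist_dataset:
    print(distributed_train_step(batch).numpy())
```

With this structure the warning goes away, because the replicated step runs as a compiled graph instead of eagerly on every call.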