Should model.compile be called inside or outside strategy.scope() when using tf.distribute?

A quick question about TensorFlow distributed training: should model.compile be called inside or outside strategy.scope()?

Both seem to work on my single-accelerator machine. Does it make a difference?

Inside the scope:

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy(devices=["/gpu:0"], cross_device_ops=tf.distribute.NcclAllReduce())
with strategy.scope():
    model = …  # build the Keras model here
    model.compile(loss='mae', optimizer='sgd')

Outside the scope:

with strategy.scope():
    model = …  # build the Keras model here

model.compile(loss='mae', optimizer='sgd')  # compile outside the scope

Hi @Yingding,

Yes, it can make a difference. model.compile creates variables of its own (the optimizer slots and metric state). When you call model.compile inside strategy.scope(), those variables are created under the strategy and mirrored across all of the devices the strategy manages; when you call it outside the scope, they are created on the default device instead, which is usually a single GPU or the CPU.
In your case, you are using a single-accelerator machine, so in practice there is no difference between calling model.compile inside or outside strategy.scope().
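As a quick check, you can inspect the type of a variable created under the scope; under MirroredStrategy it shows up as a distributed variable type rather than a plain tf.Variable. A minimal sketch, assuming TF 2.x with tf.keras (the tiny Dense model is just a placeholder):

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()  # all visible GPUs, or CPU if none

with strategy.scope():
    # Variables created here (the model weights, plus the optimizer and
    # metric variables created by compile) are managed by the strategy.
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
    model.compile(loss='mae', optimizer='sgd')

# Under MirroredStrategy this prints a distributed variable type
# (e.g. MirroredVariable), not a plain Variable.
print(type(model.weights[0]).__name__)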

It is generally recommended to call model.compile inside strategy.scope() so that everything it creates is distributed across all of the available devices. This improves training performance, especially when you are using multiple devices.
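Putting it together, a minimal runnable sketch of the recommended pattern (the small model and random data are placeholders for illustration):

import numpy as np
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    # Build and compile inside the scope so that the model, optimizer,
    # and metric variables are all created under the strategy.
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(8,))])
    model.compile(loss='mae', optimizer='sgd')

# Placeholder data, just to make the sketch runnable.
x = np.random.rand(64, 8).astype('float32')
y = np.random.rand(64, 1).astype('float32')

# model.fit itself does not need to be inside the scope.
model.fit(x, y, epochs=1, batch_size=16)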

I hope this helps!

Thanks.

@Laxma_Reddy_Patlolla Thanks so much for all the great details.

I was confused by the official TensorFlow distributed training guide, which has a mistake in its code: Distributed training with TensorFlow  |  TensorFlow Core

So, does it matter whether model.fit goes inside strategy.scope()? (Since model.compile is already inside strategy.scope(), does that affect model.fit?)