Hello,
I found out that jit_compile=True makes the model run much faster. At the same time, MirroredStrategy is a good way to process the data faster across devices. When I try to combine the two approaches with 2 or more GPUs, it does not work. Why is this the case? Is it a bug? Can it be avoided?
I get the following error message: “UnimplementedError: We failed to lift variable creations out of this tf.function, so this tf.function cannot be run on XLA. A possible workaround is to move variable creation outside of the XLA compiled function.”
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = ...
    model.compile(loss=...,
                  optimizer=...,
                  jit_compile=True)
model.fit(train_dataset, epochs=10)
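In case it helps to clarify the question, here is a minimal self-contained sketch of what I understand the workaround in the error message to suggest (the Sequential model, Dense layers, input shape, and dummy dataset below are placeholders I made up, not my real setup): build the model inside the strategy scope so its variables are created before the XLA-compiled train step runs. Is this the intended approach?

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()

# Dummy dataset just to make the snippet self-contained
train_dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal((256, 32)),
     tf.random.uniform((256,), maxval=10, dtype=tf.int32))
).batch(32)

with strategy.scope():
    # Placeholder model -- my real model is different
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    # Try to force variable creation here, outside the XLA-compiled
    # train step, by building the model with a fixed input shape.
    model.build(input_shape=(None, 32))
    model.compile(
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        optimizer=tf.keras.optimizers.Adam(),
        jit_compile=True,
    )

model.fit(train_dataset, epochs=10)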
I am using TensorFlow 2.15 with Keras 3.0.4.