Parallelising a model with multiple inputs

I have a model with 5 inputs and 5 outputs. Each output has its own loss function, but Keras minimises the sum of the individual losses (this is the default behaviour, I think).

What’s the best way to parallelise training here? By default, Keras will train each part of the model sequentially, I think. I’m interested in the best way to train the various parts of the model: a) across multiple processes on a single GPU, and b) across multiple GPUs.

Hi @Yamyamyam, welcome to the TensorFlow Forum!

You can follow these approaches:

  1. Single GPU optimization:
  • Use tf.data.Dataset with num_parallel_calls and prefetch to parallelize data loading and preprocessing (see the pipeline sketch after this list).
  • Ensure the model and batch size are large enough to keep the GPU fully utilized; TensorFlow already runs independent branches of a single graph concurrently where it can, so separate processes rarely help on one GPU.
  2. Multiple GPUs:
  • Use tf.distribute.MirroredStrategy to replicate the model and split each batch across the available GPUs.
  • Define and compile your model inside strategy.scope() so that TensorFlow places the variables and operations correctly (see the sketch further below).
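For the input-pipeline part, here is a minimal sketch, assuming the five inputs and five targets are already available as NumPy arrays. The names `input_0`…`input_4` and `output_0`…`output_4`, the shapes, and the `preprocess` function are illustrative placeholders, not details from the question:

```python
import numpy as np
import tensorflow as tf

# Placeholder data: five input arrays and five target arrays (hypothetical shapes).
xs = {f"input_{i}": np.random.rand(1000, 16).astype("float32") for i in range(5)}
ys = {f"output_{i}": np.random.rand(1000, 1).astype("float32") for i in range(5)}

def preprocess(inputs, targets):
    # Hypothetical per-example preprocessing; map() runs it in parallel on the CPU.
    inputs = {k: tf.cast(v, tf.float32) for k, v in inputs.items()}
    return inputs, targets

dataset = (
    tf.data.Dataset.from_tensor_slices((xs, ys))
    .shuffle(1000)
    .map(preprocess, num_parallel_calls=tf.data.AUTOTUNE)  # parallel preprocessing
    .batch(64)
    .prefetch(tf.data.AUTOTUNE)  # overlap data preparation with GPU compute
)
```

A dataset that yields `(inputs_dict, targets_dict)` pairs keyed by layer names plugs straight into `model.fit` for a multi-input, multi-output model.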

With these two pieces in place, you can efficiently parallelize training of a model with multiple inputs and outputs on a single GPU or across multiple GPUs.
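To make the multi-GPU part concrete, here is a hedged sketch of a 5-input / 5-output functional model compiled inside `strategy.scope()`. The layer sizes and the choice of mse losses are assumptions for illustration, and `dataset` is the pipeline from the previous sketch:

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()  # replicates the model on all visible GPUs
print("Number of replicas:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Five named inputs (shapes are illustrative).
    inputs = [tf.keras.Input(shape=(16,), name=f"input_{i}") for i in range(5)]
    # A shared trunk followed by five named output heads.
    shared = tf.keras.layers.Dense(64, activation="relu")
    outputs = [
        tf.keras.layers.Dense(1, name=f"output_{i}")(shared(inp))
        for i, inp in enumerate(inputs)
    ]
    model = tf.keras.Model(inputs=inputs, outputs=outputs)

    # One loss per output; Keras minimises their (optionally weighted) sum.
    model.compile(
        optimizer="adam",
        loss={f"output_{i}": "mse" for i in range(5)},
        loss_weights={f"output_{i}": 1.0 for i in range(5)},
    )

# tf.distribute auto-shards the tf.data pipeline: each replica receives a
# slice of every batch, and gradients are all-reduced across GPUs.
model.fit(dataset, epochs=2)
```

The per-output losses are still summed exactly as in the single-GPU case; the strategy only changes where the batches and gradients are computed, so you may want to scale the global batch size with the number of replicas.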

Thank you!

@Aniket_Dubey Did you read the question?

Hi @Yamyamyam, sorry for the misunderstanding. Correct me if I’ve misunderstood: do you want to train the multi-input model across multiple GPUs?