I watched the 2019 talk Inside TensorFlow: tf.distribute.Strategy.
At that time, the TF team had added model parallelism to their TODO list. Now that it's late Q3 of 2021, what is the development progress? Is there any general guidance on how to easily design a model-parallel strategy using TF?
May I know whether there is an RFC or a plan describing what kinds of model parallelism you will support?
For example, https://huggingface.co/transformers/master/parallelism.html describes several kinds of model parallelism strategies (naive layer-wise splitting, pipeline parallelism, tensor parallelism). Do you plan to support these kinds of parallelism?
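To make the question concrete, the only approach I know of in TF today is manual device placement. Below is a minimal sketch of naive layer-wise model parallelism; the class name and the "/GPU:0" / "/GPU:1" device strings are just my own illustration, not an existing tf.distribute API.

```python
import tensorflow as tf

class TwoDeviceMLP(tf.keras.Model):
    """First half of the network runs on GPU:0, second half on GPU:1.

    Variables are created on whichever device is active at the first call,
    so wrapping the layer calls in tf.device also places the weights.
    """

    def __init__(self):
        super().__init__()
        self.dense1 = tf.keras.layers.Dense(1024, activation="relu")
        self.dense2 = tf.keras.layers.Dense(10)

    def call(self, x):
        with tf.device("/GPU:0"):
            x = self.dense1(x)       # weights and compute on GPU:0
        with tf.device("/GPU:1"):
            return self.dense2(x)    # activations copied over to GPU:1

model = TwoDeviceMLP()
logits = model(tf.random.normal([8, 784]))  # one forward pass spanning both devices
```

This works, but it is all hand-written placement with no sharding, pipelining, or integration with tf.distribute.Strategy, which is why I am asking whether something higher-level is planned.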