While there is a lot of ground to cover in “distributed training with TPUs,” I have written a blog post that helps anyone get started.
My latest PyImageSearch blog post covers the following details:
- Hardware used for deep learning (CPUs, GPUs, and TPUs).
- An efficient data pipeline for TPUs (using tf.data).
- A primer on distributed training.
Link: Fast Neural Network Training with Distributed Training and Google TPUs - PyImageSearch
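For a taste of the data-pipeline part, here is a minimal sketch of the kind of tf.data input pipeline that keeps a TPU fed: parallel preprocessing, a large static batch, and prefetching to overlap input work with training. The `preprocess` function and the synthetic `range` dataset are stand-ins for real image loading, not code from the post.

```python
import tensorflow as tf

# Hypothetical preprocessing step standing in for real image decoding.
def preprocess(x):
    return tf.cast(x, tf.float32) / 255.0

# Build a pipeline that keeps the accelerator busy:
# parallel map, fixed batch shape, and prefetching.
dataset = (
    tf.data.Dataset.range(1024)                            # stand-in for real data
    .map(preprocess, num_parallel_calls=tf.data.AUTOTUNE)  # parallel preprocessing
    .batch(128, drop_remainder=True)                       # TPUs need static shapes
    .prefetch(tf.data.AUTOTUNE)                            # overlap input with compute
)
```

The same pipeline shape works on CPU/GPU too; `drop_remainder=True` matters most on TPUs, which require fixed batch dimensions.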