@Sayak_Paul and I have been covering the deployment aspects of (TensorFlow) vision models from Transformers, and we’re delighted to announce the final post in that series today!
In this post, we cover deployment with Vertex AI. You’ll learn how to deploy a ViT B/16 model following best practices, consume the deployed endpoint in different ways, run load tests, and, just as importantly, understand the pricing around the deployment.
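For a rough idea of what that workflow looks like, here’s a minimal sketch using the google-cloud-aiplatform SDK. The project ID, bucket path, container image tag, and input key are placeholders, not the exact values from the post:

```python
# Minimal sketch (not the exact code from the post): register a TF SavedModel
# of ViT B/16 with Vertex AI, deploy it, and query the endpoint.
# Project, bucket, container image, and input format are placeholders.
import base64

from google.cloud import aiplatform

aiplatform.init(project="my-gcp-project", location="us-central1")

# Upload the SavedModel using a prebuilt TF serving container.
model = aiplatform.Model.upload(
    display_name="vit-b16",
    artifact_uri="gs://my-bucket/vit-b16/saved_model",
    serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-8:latest",
)

# Deploy to an endpoint that can scale between 1 and 2 replicas.
endpoint = model.deploy(
    machine_type="n1-standard-8",
    min_replica_count=1,
    max_replica_count=2,
)

# Query the endpoint, assuming the serving signature accepts
# base64-encoded image bytes under the key "image_bytes".
with open("cat.jpg", "rb") as f:
    payload = base64.b64encode(f.read()).decode("utf-8")

prediction = endpoint.predict(instances=[{"image_bytes": payload}])
print(prediction.predictions)
```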
When we started this series, there was a dearth of resources showing how to deploy TF models from Transformers to GCP and the TF ecosystem following good practices. We wanted to close that gap.
Here’s the latest post: Deploying ViT on Vertex AI
For those who are interested in the last two posts:
Thanks to all the reviewers: @merve, @osanseviero, João Gante, Matthew Carrigan, and Steven Liu! Thanks to the ML Developer Programs team at Google (Soonson Kwon) for all the support around GCP credits.