[Model Launch] Welcome Google’s Gemma!
Recently, Google announced a new family of open large language models called Gemma, built from the same research and technology used to create Gemini. We worked hard behind the scenes to support this launch and make it really simple for you to get started with Gemma and begin publishing your own fine-tuned variants. There are a few ways to start; keep reading to learn more!
Use Gemma with popular frameworks and an ever-growing library of code samples and starter notebooks. Here are the key details to know:
- We’re releasing model weights in two sizes: Gemma 2B and Gemma 7B. Each size is released with pre-trained and instruction-tuned variants.
- A new Responsible Generative AI Toolkit provides guidance and essential tools for creating safer AI applications with Gemma.
- We’re providing toolchains for inference and supervised fine-tuning (SFT) across all major frameworks: JAX, PyTorch, and TensorFlow through native Keras 3.0.
- Ready-to-use Colab and Kaggle notebooks, alongside integration with popular tools such as Hugging Face, MaxText, NVIDIA NeMo and TensorRT-LLM, make it easy to get started with Gemma (see the sketch after this list).
- Pre-trained and instruction-tuned Gemma models can run on your laptop, workstation, or Google Cloud with easy deployment on Vertex AI and Google Kubernetes Engine (GKE).
- Optimization across multiple AI hardware platforms, including NVIDIA GPUs and Google Cloud TPUs, ensures industry-leading performance.
- Terms of use permit responsible commercial usage and distribution for all organizations, regardless of size.
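To make the Hugging Face integration concrete, here is a minimal sketch of running the instruction-tuned Gemma 7B with the transformers library. It assumes a recent transformers release with Gemma support, that you have accepted the model terms on the Hub, and that the checkpoint id is `google/gemma-7b-it`; adjust the id, dtype, and device placement for your setup.

```python
# Minimal sketch: text generation with instruction-tuned Gemma via transformers.
# Assumes transformers >= 4.38, accepted model terms on the Hub, and the
# "google/gemma-7b-it" checkpoint id (our assumption of the Hub name).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "google/gemma-7b-it"  # instruction-tuned 7B variant

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to fit on a single accelerator
    device_map="auto",           # place weights on available devices
)

# The instruction-tuned variants ship a chat template in the tokenizer.
messages = [{"role": "user", "content": "Write a haiku about open models."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The same checkpoints can be fine-tuned with your usual SFT tooling or loaded through Keras 3.0 for the JAX and TensorFlow paths mentioned above.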
We can’t wait to see what you build! We know that this community has the skills and curiosity to openly stress test Gemma and share impressive fine-tuned variants that accelerate ML innovation even further.
If you’re near London, join our in-person event at Google UK’s offices to learn how to use Gemma models and build the future!
Are you interested in chatting with Googlers and other developers using Gemma? Join the Google Developers Community Discord server! This is a great place to interact with developers, and to learn, share, and support each other.
You can find us in the gemma channel in the “All things AI” section. See you there!