Hello,
I am currently working with the Gemma family of models and would like to inquire about the availability of a Quantization-Aware Training (QAT) version of the Gemma-27B model.
Could you please confirm whether a QAT-trained or QAT-ready variant of Gemma-27B exists? If so, I would greatly appreciate any documentation, technical details, or guidance on accessing and deploying it.
Additionally, because we intend to use the model for commercial purposes, could you please clarify under what license the QAT model (or its weights) is provided? Specifically, we are hoping to use it under MIT, Apache-2.0, or a similarly permissive license, so we can confirm compatibility with our use case.
Thank you for your time and assistance. I look forward to your response.