Hey all! We are rolling out Gemini 3 Flash. It is more capable than even 2.5 Pro on many benchmarks and much more cost-effective, and it has a small free tier available for now in both the UI and the API: Build with Gemini 3 Flash: frontier intelligence that scales with you
Excellent! Are there plans to bring this model into Antigravity?
Thanks for your update here in the community, Logan! Much appreciated.
The model is great so far. I feel like Gemini 3 Pro is still better at UI, but I'm not complaining since the price is unbelievably good.
Huge billing/token discrepancy using Gemini 3 Flash in place of Gemini 3 Pro on the same API key for the same function. I was averaging $2-6 in cost with Gemini 3 Pro; just testing Gemini 3 Flash with the same usage hit 7.7 million tokens, for a $25 cost yesterday. You can see the spike even though the usage is the same.
I have been testing the new Gemini 3 Flash model in Google AI Studio this week. The transition to this new model was a pleasant surprise for my workflow: it revealed a significant increase in code-generation velocity. While Gemini 3 Pro required noticeable processing time, Gemini 3 Flash eliminates much of that latency.
Moreover, this reduction in latency did not compromise the logic or accuracy of the generated code. Gemini 3 Flash maintains a high level of reasoning. When a tool responds this quickly, it stops feeling like a separate utility and starts feeling like an extension of the developer’s own thought process.
High-speed generation is no longer a luxury; it is the new baseline for our productivity.
So far the model is much better. There are still recitation errors when I try to extract text from my own document (temperature 0) for deterministic results.
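For reference, this is roughly the request body I send for extraction. The prompt wording and document text are just my setup; the `generationConfig` block follows the standard Gemini REST API shape, with temperature pinned to 0:

```python
import json

def build_request(document_text: str) -> str:
    """Build a generateContent request body for deterministic text extraction.

    The prompt text here is a placeholder; only the generationConfig shape
    (temperature pinned to 0) is the point of this sketch.
    """
    body = {
        "contents": [
            {
                "role": "user",
                "parts": [
                    {"text": "Extract the plain text from this document:\n" + document_text}
                ],
            }
        ],
        "generationConfig": {
            "temperature": 0,  # minimize sampling variance between runs
            "topP": 1,
        },
    }
    return json.dumps(body)

payload = build_request("(my document text here)")
```

Even at temperature 0, recitation checks can still trip on text that closely matches training data, which seems to be what I'm hitting.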
I’m getting 429 errors due to the quota limits of gemini-3-flash-preview. When will gemini-3-flash be available? I tried removing the -preview suffix in my API call, but it seems the model is not yet fully deployed.
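In the meantime I’m working around the 429s with jittered exponential backoff. A minimal sketch, where `RateLimitError` and the flaky call are stand-ins for whatever your HTTP client raises and for the actual API request:

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for the 429 your HTTP client raises."""

def with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry `call` on RateLimitError with jittered exponential backoff."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the 429
            # base, 2*base, 4*base, ... plus jitter so concurrent
            # clients don't retry in lockstep
            time.sleep(base_delay * (2 ** attempt) + random.random() * base_delay)

# Usage sketch: a call that 429s twice, then succeeds.
attempts = {"n": 0}

def flaky_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RateLimitError()
    return "ok"

result = with_backoff(flaky_call, base_delay=0.01)
```

It doesn’t raise the quota, but it stops bursts from failing outright.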
Following up on @Alfie’s comment, is there a way to “subscribe” to a notification for when gemini-3-flash-preview is bumped to gemini-3-flash? Using gemini-flash-latest puts me on a 2.5 version; I’d rather stick with 3, if there’s a model name for that?
Very good sneaky breaking changes on the 23rd. You let the API developers suffer as much as you can; that will maximize adoption… indeed, yes… This model is “more capable” when it starts producing JSON text parts overnight without prior warning, Mr. Breaking, breaking the one and only pure Java Gemini client: GitHub - anahata-os/gemini-java-client: a “pure Java” gemini-cli implementation with a Swing JPanel to integrate a Gemini chat into any Java application, or it can be run standalone as a desktop app with a GUI.
There is no documentation for the breaking API changes, so maybe… Gemini doesn’t like documentation.
Been using and playing with Gemini 3 Flash, and all I can say is wow! The coding capabilities and speed are truly amazing; it gets complex tasks done faster, and sometimes even better than Claude Sonnet 4.5 (the ordinary version, not the thinking one). Truly great work, guys.
Will it get support for oneOf in the function-call JSON schema?
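To illustrate what I mean (the tool and field names here are made up): a function declaration where one parameter accepts exactly one of two alternative shapes, expressed with JSON Schema `oneOf`:

```python
# Hypothetical function declaration: "target" is either a user id
# or an email address, but not both. Whether the Gemini function
# calling schema accepts "oneOf" is exactly the question.
lookup_user_decl = {
    "name": "lookup_user",
    "description": "Look up a user by id or by email.",
    "parameters": {
        "type": "object",
        "properties": {
            "target": {
                "oneOf": [
                    {
                        "type": "object",
                        "properties": {"id": {"type": "integer"}},
                        "required": ["id"],
                    },
                    {
                        "type": "object",
                        "properties": {"email": {"type": "string"}},
                        "required": ["email"],
                    },
                ]
            }
        },
        "required": ["target"],
    },
}
```

Today I have to flatten this into one object with both fields optional and enforce the either/or rule in my own code.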
Hello @Aksel_Skaar_Leirvaag, thank you for reaching out. Could you please share more details regarding the specific functionality or support you are looking for?
Features such as function calling and JSON schema should work as expected in the gemini-3-flash-preview model. If you are encountering issues with these features, please let us know so we can assist you further.
