We wanted to give folks a heads-up that Gemini 2.5 Pro Preview 05-06 is scheduled for deprecation on June 19, 2025, given that we just launched an updated preview version, Gemini 2.5 Pro Preview 06-05 (Gemini 2.5 Pro Preview in AIS), earlier this week, which will be reaching general availability (GA) in a matter of weeks.
We recommend switching to the updated model which shows improvements across the board on key benchmarks for coding, science, multimodal understanding and reasoning. We have also improved creative writing, style and structure based on feedback from our previous releases.
Please let us know if you have any issues or concerns with switching to the updated Pro model by June 19, 2025.
These benchmarks are misleading and inflated! Claude Sonnet 4 (Extended Thinking) consistently corrects content generated by Gemini 2.5 Pro Preview 06-05, and Gemini itself acknowledges its own limitations and reasoning gaps; yet the benchmark results indicate otherwise.
Providers should stop publishing benchmark results and allow the quality and standards of the models to speak for themselves. It is the quality that determines the benchmark results, not the other way around. I presume that you do not extensively use your own models.
The only advantage your models have is their enormous context window (token limit), which is unparalleled.
This news is quite disappointing. After conducting actual testing, our team found that the 06-05 version performs significantly worse than the 05-06 version, yet now the official team is planning to discontinue the better-performing 05-06 version.
It's frustrating to see that the version with superior performance in logical reasoning, text comprehension, and generation quality (05-06) is being phased out in favor of an inferior one (06-05). This seems like a step backward for users who rely on consistent, high-quality performance.
I hope the official team reconsiders this decision or at least addresses the performance issues in 06-05 before making the switch mandatory. Discontinuing the better version doesn't serve users' best interests.
Glad to see that the 06-05 performance issue is verified! I had long suspected its performance was worse than 05-06's and got very frustrated because that contradicts the benchmark rankings.
I notice that 06-05 tends to "think" more (i.e., it takes more liberty in its generation process), but its accuracy and comprehension are clearly lacking!
I was able to simulate the thinking process in the answer itself. I noticed that 06-05 is extremely lazy and tries to avoid long analysis as much as possible.
It feels like the laziness is mostly in the decision-making part. For example, if you start a deep analysis with 05-06 and have 06-05 continue it, everything is completely fine. But if you ask 06-05 to do the same from scratch, it cuts corners as much as possible.
I have been super happy with the quality of the 05-06 version in creating structured output for security tasks, outperforming any other model I have seen before. With the 06-05 model this is totally broken: it starts hallucinating stuff hardly related to the task, and it's no longer useful for the security tasks I am working on.
Is there some way to stick with the older model beyond June 19th so we have some time to find workarounds?
So, with no new activity today (June 12, 2025), I've now started getting "You've reached your rate limit. Please try again later." I guess I don't understand what the current policy is with https://aistudio.google.com/prompts. It would help to clarify what the actual non-API use policy is and what the actual rate limit is (and how to change rates). Thanks.
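Until the actual limits are clarified, a common client-side workaround for intermittent rate-limit errors is retrying with exponential backoff and jitter. A minimal sketch in Python; `call_model` below is a hypothetical stand-in for whatever request you are making, and the error-matching heuristic is an assumption you should adjust for your client library:

```python
import random
import time

def with_backoff(call, max_retries=5, base_delay=1.0, max_delay=30.0):
    """Retry `call` with exponential backoff and jitter when it raises a
    rate-limit style error (here: any exception whose message mentions
    'rate limit' -- adjust this check for your client library's exceptions)."""
    for attempt in range(max_retries):
        try:
            return call()
        except Exception as exc:
            if "rate limit" not in str(exc).lower() or attempt == max_retries - 1:
                raise
            # Sleep base * 2^attempt, capped at max_delay, plus random jitter
            # so many clients don't retry in lockstep.
            delay = min(base_delay * (2 ** attempt), max_delay)
            time.sleep(delay + random.uniform(0, delay / 2))

# Hypothetical usage: wrap whatever API call is hitting the limit.
# result = with_backoff(lambda: call_model(prompt))
```

Non-rate-limit errors are re-raised immediately so real bugs are not hidden behind retries.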
I have an AI agent that works really well with the 05-06 version but is too slow with 06-05. It would be great if you could keep Gemini 2.5 Pro Preview 05-06 up for as long as possible. It's being deprecated too soon, it seems.
Latency is also prohibiting our switch to 06-05, and we're ready to switch to OpenAI when 05-06 deprecates. I would appreciate being able to set and forget a model name. Also, if there were a single model name that routed to the latest experimental version, so at least we wouldn't have breaking code when experimental models are phased out, that would be helpful.
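In the absence of a provider-side alias, one way to avoid breaking code on each deprecation is to resolve the model name from configuration with a pinned default, so a cutover is a one-value change instead of a code change. A minimal sketch; the model identifiers and the `GEMINI_MODEL` variable name are illustrative assumptions, not an official convention:

```python
import os

# Pin a default model in exactly one place. The identifier below is
# illustrative; substitute whatever your provider actually exposes.
DEFAULT_MODEL = "gemini-2.5-pro-preview-05-06"

def resolve_model(env_var: str = "GEMINI_MODEL", default: str = DEFAULT_MODEL) -> str:
    """Return the model name to use, preferring an environment override.

    When a preview model is deprecated, updating the GEMINI_MODEL
    environment variable (or the pinned default here) is the only
    change needed -- no call sites are touched."""
    return os.environ.get(env_var, default)

# Hypothetical usage with whatever client object you hold:
# response = client.models.generate_content(model=resolve_model(), contents=prompt)
```

The same indirection works for a config file or a secrets manager; the point is that call sites never hard-code a dated preview name.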
Strongly urge the Gemini team to reconsider deprecating model 05-06. The performance drop in 06-05 is stark: weaker analytical capabilities, less coherent code generation, and noticeably shorter/less detailed outputs. For users needing reliable, high-quality results, 05-06 is the far superior tool.
Please keep it operational until 06-05 is genuinely an upgrade, not a downgrade
Same experience here. I'm trying to migrate my app to the 06-05 version, but performance degrades noticeably. The API starts behaving in unpredictable ways (I'm using the OpenAI-compatible API, and when I'm streaming responses the new model sometimes leaks "thoughts" into the regular content without any metadata marking those chunks as such; it also sometimes just seems to get stuck, which 05-06 doesn't).
I'm quite worried: the concept for my app seemed very promising on 05-06, but on 06-05 it seems the user experience would really start to get in the way of adoption.
Requesting the Gemini team to reconsider the deprecation: we found 05-06 to perform excellently for our use case, and switching to 06-05 will require re-running multiple baselines, costing us thousands of dollars because these are long-thinking models that are expensive.
Requesting the Gemini team to at least delay the deprecation of the 05-06 model by a month, so that we can start the painful process of redoing prompts and evaluations to get the same accuracy with the recently released GA model, or migrate to another provider.
Hey everyone, thank you for your feedback! Given the feedback, we're delaying the deprecation date for Gemini 2.5 Pro Preview 05-06. I'll follow up shortly when we've finalized the new plan.
@Vishal this is brilliant news, and genuinely appreciated by so many of us who rely on the unique capabilities of 05-06! Seriously, thank you and the Gemini team for listening to the community feedback and delaying the deprecation; it definitely gives us some much-needed space to breathe.
While we're all looking forward to the new plan, and as we keep using 05-06, could we perhaps get some assurance that the preview version (05-06) will continue to operate at its established performance baseline until its eventual, newly defined deprecation date?
I've recently perceived some fluctuations in its output quality. Its performance had been consistently strong; however, I've noticed a subtle shift just today, June 19th, which was the day it was originally set to be deprecated. These newer responses don't always reflect the consistent depth and nuanced understanding we've come to rely on from 05-06. It would be very reassuring to know that it will be maintained at its full, original performance level and characteristic strengths for the remainder of its availability; this would allow us to continue our work with confidence and make accurate assessments as we plan for any future transitions ^^
Really appreciate you guys taking the feedback on board, and looking forward to hearing about the new plan!