So, going through the thinking process was my way to learn while it generated code. I am so disappointed it has been removed. Why not just remove the thinking part from the UI entirely if we don’t get to see its thinking?
Google really be shooting themselves in the foot right before Deepseek R2 and new GPT models drop. Perhaps things were getting too easy for them and they wanted to give their competitors a fair chance.
Honestly, yeah. I feel like this move definitely hurts developer sentiment towards their product. They’re showing how unstable the platform is, purposely lobotomizing models (that people are paying for, no less) just to charge more for a product that we all, in part, helped train.
This is deeply disturbing. I was really rooting for Google with this one. I hope another company will soon be able to replicate the performance of 03-25.
How the hell am I supposed to create a system instruction in AI Studio or via the API without knowing how it affects Gemini’s thinking behaviour?
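To make the complaint concrete, here’s a minimal sketch of the kind of system instruction in question, assuming the google-generativeai Python SDK; the model name and instruction text are illustrative examples, not anything Google documents as affecting the thinking phase:

```python
# Minimal sketch: a system instruction that tries to shape the thinking phase.
# The instruction text and model name are hypothetical examples.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

model = genai.GenerativeModel(
    model_name="gemini-2.5-pro-preview-05-06",
    system_instruction=(
        "Before answering, think step by step: restate the question, "
        "list your assumptions, then evaluate alternatives."
    ),
)

response = model.generate_content("Why does my regex backtrack so badly?")
print(response.text)

# With summarized CoT there is no longer any way to verify whether the
# instruction above actually changed the model's thinking behaviour.
```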
And is this raw output, or was the output generated first and then summarized, or did the internal thinking produce this summary directly? It matters because the whole advantage of a chain-of-thought / tree-of-thought is that it guides the output. A summary in place of the raw reasoning doesn’t help the AI structure its process precisely; this summarizing is mostly useless for helping the AI.
The AI uses that structured output to bring more and more detail into the final answer, and a summary has no detail (it’s useless); the summary is more or less already contained in the internal thinking anyway.
Detailed thinking output is static, a base structure the reasoning adheres to so the final output stays guided. Thinking is about producing high detail and keeping thoughts STATIC, not leaving them in some dynamic mess that can CHANGE at any time and drop in quality.
The summaries also appear with a delay, so I think Gemini has already output the raw thinking and is then summarizing it. Meaning… Google doesn’t even save tokens: the delayed summaries are probably generated from hidden raw output that came first, so they actually spend more tokens, but they get to hide the model’s behaviour from users?
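If you want to eyeball that delay yourself, here’s a rough sketch that timestamps streamed chunks, assuming the google-generativeai SDK and an illustrative model name; it can only show when text arrives client-side, not what actually happens server-side:

```python
# Rough sketch: timestamp streamed chunks to observe arrival delay.
# This cannot prove *why* output is delayed, only when text shows up.
import time
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-2.5-pro-preview-05-06")

start = time.monotonic()
for chunk in model.generate_content("Plan a 3-step refactor.", stream=True):
    elapsed = time.monotonic() - start
    print(f"{elapsed:6.2f}s  {chunk.text[:60]!r}")
```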
Sounds like the only purpose is to hide Gemini’s functionality from users
Yeah, basically you can throw away system_instruction, because you don’t know what it does anymore. The raw thinking is most likely still being generated; Gemini outputs the summary with a delay, meaning it came from that raw output.
I can’t count how many times I was reading the chain of thought of a model and realized that it subtly misunderstood my requirements or question, or even just failed to process a document (which it would then pretend it had read in the final output). With every CoT now being a variation of “I started working on this”, “I am completing this task”, etc., it has become fundamentally useless to me, because it’s not actually showing thoughts and reasoning paths anymore.
This is incredibly disappointing for me. I’m basically just echoing what others have said, but:
This continues a frustrating trend of Google making their models worse. 03-25 2.5 Pro was the best model I’d ever used; then it got changed to 05-06, and now it’s gotten even worse
With the introduction of summarized CoT, any system instructions which are intended to modify or utilize Gemini’s CoT are rendered moot
There is no way to see Gemini’s CoT to check whether or not it is actually understanding instructions
There is a demonstrable change in output quality, and there’s no real way to refine prompts to recover it, because there’s no way to see what the model is misunderstanding
It really seems, as another user said, that the ONLY purpose of this is to reduce and obscure Gemini’s functionality, which to me presents challenges beyond quality alone.
And on top of all of this, the model with proper thinking is locked behind a $250-a-month paywall, and its CoT is worse than the model we had been using.
Was this the plan all along?
Rapid prototyping. Now I have to wait up to 5 minutes for a useful answer; before, it was seconds. (For example, I frequently use a divergent/convergent thinking approach: “list 200 possible candidates → rate → select top 10”.)
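For context, here is roughly what that divergent/convergent pass looks like as a single prompt, sketched with the google-generativeai Python SDK; the task, model name, and wording are just placeholders:

```python
# Sketch of a divergent/convergent prompt: diverge wide, rate, converge.
# Task and model name are illustrative placeholders.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-2.5-pro-preview-05-06")

prompt = (
    "Task: propose names for a CLI tool that syncs dotfiles.\n"
    "Step 1 (diverge): list 200 possible candidates.\n"
    "Step 2 (rate): score each candidate 1-10 for memorability.\n"
    "Step 3 (converge): select the top 10 and justify each pick."
)

print(model.generate_content(prompt).text)

# With raw CoT you could confirm the model really enumerated candidates
# before converging; with summaries, any shortcut it takes is invisible.
```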
Extremely often the CoT shows wrong assumptions by the LLM that should have been specified in the original prompt. This includes user errors, where I omitted crucial context. The final output often doesn’t surface these assumptions at all.
Validating the model’s assumptions and the direction it takes was my best diagnostic for understanding and improving my own prompts, and for getting genuinely valuable outputs from Gemini models. Without it, my confidence in the outputs drops significantly and my ability to fine-tune prompts degrades just as much.
Bring it back. Summaries are not a replacement in any way.
Please bring back the 2.5 pro everyone loved from March and the CoT. The new model just feels worse in every way and it’s definitely making me rethink my subscription.
I won’t lie and say I’m not a Google fanboy, but this just feels like a slap in the face.
This is incredibly frustrating. I understand that there is some inherent danger in showing the full CoT, but for devs this is a significant downgrade. The CoTs helped enormously in tuning agentic workflows correctly.
Frankly, the summaries are so infantilizing that they turned me off from Gemini outright. I will henceforth be switching to different models for all my applications unless the raw CoT comes back. This is incredibly patronizing on top of a long, long chain of other failures.
Made an account just so I can add my frustrations to this. This update is just bad. It is. What is the point of the chain of thought if all we get is a summary that sounds like an e-girl’s journal? “I’m now analyzing the user’s request.” Um. What? No? You really need to undo or roll back this update.
I agree that this is definitely a step back. The chain-of-thought being transparent was a great differentiator for Gemini compared to competitors, and it was extremely useful to debug and understand how the model operated. The fact that it is now obfuscated and watered down is deeply concerning and objectively makes the model worse. The fact that a change this impactful was made without warning, or so much as an announcement, as if we are just supposed to not notice it happened, is both disheartening and severely concerning for the future prospects of Gemini. How exactly are we supposed to work on projects that rely on it if models receive direct downgrades on a whim?
Thanks for engaging with the community on this. I wanted to add my voice to those expressing concern about the removal of the detailed thinking process.
For me, and it seems for many others, those step-by-step ‘raw thoughts’ were incredibly valuable. It wasn’t just about seeing that the model was working, but how. When it used to lay out phases like ‘identifying the core question,’ ‘evaluating information,’ etc., it was like getting a mini-tutorial in structured thinking with every query. This was a huge help for me in improving my own analytical skills, much like learning from a clear, methodical teacher.
The new summaries, like ‘I’m analyzing…’ feel much more opaque. While I understand the model is still processing, that transparency into its reasoning, which was so good for learning and for debugging prompts when things didn’t go as expected, is now gone. It’s harder to understand why the model might have misinterpreted something or to refine my prompts effectively to get the nuanced results I need.
I really hope you’ll consider the feedback from users who relied on this feature. Perhaps it could be an option to toggle between the summary and the detailed view? Losing that insight feels like a significant step back for those of us who used AI Studio not just for answers, but as a learning and development tool.
Agreed. You can correct the prompt if the CoT is wrong, and it made the model less of a black box where you can’t see a thing. Overall it felt more natural, and I think the best option is to make it an optional toggle.