Hi! It seems like the new Gemini 2.5 Pro (05-06 update) is behaving differently. Before, when I asked for things like “write an email” or “find information online,” the model would think through the answer in detail, almost like it was “processing” before responding. If you compare the “thinking” slider for the same prompt in the old and new versions, the difference is striking: the old Gemini 2.5 Pro exp 03-25 wrote lengthy, self-reflective text—asking itself questions, researching, adjusting logic, and polishing answers like a human would. It felt creative and thorough.
Now, responses come faster but feel less thorough, as if it skips the deep-analysis step. The new version barely writes any “thinking” text before delivering the final answer, which makes it seem rushed. For example, it reacts almost instantly to regular requests, but when I give it coding tasks it still takes longer and thinks more carefully.
I’ve also noticed that replies have become overly polite and vague, similar to ChatGPT, whereas the earlier version was more direct and informative. I miss that detailed, “smart” depth, especially for non-technical tasks. Could we restore the balance? Let the model analyze regular requests as deeply as it does coding tasks, even if responses take slightly longer, and bring back that visible, creative problem-solving process. And maybe dial back the forced formality: clarity matters more than politeness. Thanks!