I came across a similar post, but I mostly disagree. In my experience, Gemini 2.5 Pro handles coding tasks quite well, even when the context grew to around 250K tokens.
However, I’ve recently noticed that once inputs get closer to 100K–150K tokens, it starts to struggle with coherence, often losing focus on the objective or the original prompt.
Additionally, it tends to cling to its own interpretation, even after I clearly highlight issues and suggest fixes.
I also noticed that if I edit a prompt mid-conversation, Gemini tends to fixate on that edit.
For example, I once added a joking “gne-gne” in a prompt (yeah, I know, not super professional), and from that moment on, every answer included:
You’re absolutely right. Your “Gne-Gne” is well deserved.
It kept referencing the joke, and the responses started to feel less reliable. I’m wondering if there’s a way to re-anchor its focus or reset that memory mid-chat.
Curious if others have seen similar behavior with large contexts?
That said, I also want to highlight how much Gemini 2.5 has helped me with my hobby-game coding projects. I’ve achieved results I honestly don’t think I could have reached on my own — it’s been a real game-changer.