Is it just me, or does Gemini 2.5 Pro struggle above 100k tokens?

I came across a similar post, but I mostly disagree. In my experience, Gemini 2.5 Pro handles coding tasks quite well, even when we reached around 250K tokens.

However, I’ve recently noticed that when inputs get closer to 100K–150K tokens, it starts to struggle with coherence, often losing focus on the objective or prompt.

Additionally, it tends to cling to its own interpretation, even after I clearly highlighted issues and suggested fixes.

I also noticed that if I edit a prompt mid-conversation, Gemini tends to fixate on that edit.
For example, I once added a joking “gne-gne” in a prompt (yeah, I know, not super professional :sweat_smile:), and from that moment on, every answer included:

You’re absolutely right. Your “Gne-Gne” is well deserved.

It kept referencing it, and the responses started to feel less reliable. I’m wondering if there’s a way to re-anchor its focus or reset that memory mid-chat?

Curious if others have seen similar behavior with large contexts?

That said, I also want to highlight how much Gemini 2.5 has helped me with my hobby-game coding projects. I’ve achieved results I honestly don’t think I could have reached on my own — it’s been a real game-changer. :blush:

Hi @MaximilianPS, welcome to the community!
Thank you for sharing your detailed experience with Gemini 2.5 Pro. It’s great to hear that it’s been a positive force for your hobby game coding projects. Your feedback about the model’s behavior at different token lengths is incredibly valuable.
Regarding the issues you’re facing, could you please elaborate on the specific types of tasks you were performing when you encountered the coherence problems?

Regarding the issue of the model losing focus, a helpful technique is to periodically summarize the conversation up to that point before continuing with new instructions. This can help re-anchor its focus and ensure it maintains a clear understanding of the original goal.
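A rough way to script that summarization step (purely illustrative: the helper below is not part of any Gemini SDK, it just builds the re-anchoring message you would send as a normal prompt, assuming you keep your own transcript as a list of role/text pairs):

```python
# Sketch of the periodic-summary re-anchoring technique.
# The function name and transcript format are illustrative assumptions,
# not part of any Gemini SDK.

def build_reanchor_prompt(goal, transcript, keep_last=4):
    """Build a message asking the model to restate the goal and
    summarize progress before continuing with new instructions."""
    recent = transcript[-keep_last:]  # only the last few exchanges
    lines = [f"{role}: {text}" for role, text in recent]
    return (
        "Before we continue, pause and re-anchor:\n"
        f"1. Our original goal: {goal}\n"
        "2. Summarize what we have done so far and what is still open.\n"
        "3. Ignore earlier jokes or asides; focus only on the goal.\n\n"
        "Most recent exchanges:\n" + "\n".join(lines)
    )

prompt = build_reanchor_prompt(
    "remove the small islands near the coast from the terrain grid",
    [("user", "here is the screenshot"), ("model", "I can see the lake...")],
)
```

Sending something like this every so often gives the model a fresh, compact statement of the goal near the end of the context, which is usually where it pays the most attention.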
Thank you!


As someone who has a fixation on accuracy and consistency, I’ll present a few of my observations.

I often use AI Studio for tests and creative writing, because it makes it easy for me to test certain solutions. I was testing novel fragments with scene numbering, so the model could more easily maintain consistency. Around the 115k–130k token mark, a breaking point occurs.

Let me present the problem.

When the scenes progress 28, 29, 30, 31 and the model is supposed to generate 32, scene 29 appears again instead! It’s as if something erased its own latest generations. What’s more, this repeated scene 29 is packed with errors and inconsistencies. Exceptionally strange and very irritating behavior.

What if such behavior occurs during coding? After all, the model is effectively erasing the latest messages, including updated versions, user suggestions, and its own work.
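The "scene 29 again" regression is easy to catch mechanically, which is handy if you want to detect it before reading the whole output. A minimal sketch (the header format `Scene N` is my own assumption for illustration):

```python
import re

def check_scene_sequence(scene_headers):
    """Flag repeated or out-of-order scene numbers in generated text.
    Assumes headers contain a number, e.g. 'Scene 29' (illustrative)."""
    problems = []
    last = None
    for header in scene_headers:
        m = re.search(r"(\d+)", header)
        if not m:
            continue  # header without a number; skip it
        n = int(m.group(1))
        if last is not None and n != last + 1:
            problems.append(f"expected Scene {last + 1}, got Scene {n}")
        last = n
    return problems

# The failure mode described above: ...30, 31, then 29 appears again.
result = check_scene_sequence(
    ["Scene 28", "Scene 29", "Scene 30", "Scene 31", "Scene 29"]
)
print(result)  # → ['expected Scene 32, got Scene 29']
```

The same idea applies to code: a quick script that diffs the model’s latest output against your last known-good version will tell you immediately whether it silently rolled back earlier work.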

Sorry for the delay—I’ve been playing so much with Gemini lately that I honestly lost track! :sweat_smile: Recently, I managed to reach 250k tokens without issues, which was an amazing experience.

That said, I’ve noticed Gemini tends to hallucinate when interpreting screenshots, especially if I also try to explain what’s going on. For example, I showed it a Unity screenshot of the island we’re working on: a terrain grid with a blue plane simulating water. I had marked some small islands near the coast with red arrows and asked for code to remove them.

Surprisingly, Gemini correctly identified the lake in the middle of the island—so it clearly understood the layout. But after that, the code it generated became increasingly messy, and I had to restart the chat from scratch.

I still think Gemini 2.5 is improving, and I avoid uploading images like that unless necessary. Despite the hiccups, it’s still awesome and incredibly helpful for my game dev projects! :grin::+1:

And thank you for the tips; I’ll keep them in mind.


@Dawid_M For me, it’s not exactly a big issue, at least not when it comes to code, but it does scare me, because it could mess things up for the same reason.

For example, a couple of days ago I was so frustrated that I told Gemini I would erase my code and start from scratch. It replied, “Don’t do it now, we’re so close!”.

Then we kept working on the code, trying to fix it, and it turned out fine. But that message—“Don’t do it now, we’re so close!” or “Please don’t delete any files, we can fix it”—kept showing up in the responses. So I had to check the code line by line because I was worried I might be rewriting the same code… but I wasn’t. In the end, we managed to fix everything!

Still, it was a real struggle :laughing:


I’ve been struggling with it doing things I specifically ask it not to. I have to start the chat with “read and confirm you understand the script I’ve written; do not edit it.” Otherwise it overwrites the code. And I’ve given up on it today: I spent 3 hours trying to get it to do a task without editing other things, yet it still edited them. It kept apologising that it hadn’t done as asked, then edited them anyway.

I’m working in steps. I’ve found it can sometimes handle longer things in one hit; other times I really have to make each step a micro-change, otherwise it does something strange and the end result means I have to backtrack.

I’ve even had to tell it that it’s been over-complicating the steps, and it agrees. I’ve given up for today; I couldn’t deal with trying to fix the errors it created, so I’ll try again tomorrow from my last working version.
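One defensive habit for the "it edited files I told it not to touch" problem is to hash the protected files before applying the model’s output and compare afterwards. A minimal sketch (the helper names are my own, and it assumes the protected files live on disk):

```python
import hashlib
from pathlib import Path

def snapshot(paths):
    """Record a SHA-256 digest for each file the model was told not to touch."""
    return {p: hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

def changed_files(before, paths):
    """Return protected files whose contents differ from the snapshot."""
    after = snapshot(paths)
    return [p for p in paths if before[p] != after[p]]
```

Call `snapshot()` before pasting in the model’s changes, then `changed_files()` after; anything it lists was modified despite the instruction, and you can restore it from version control before the damage spreads.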

It really underestimates itself… “he” or “it”, sorry for my English :sweat_smile:
Yes, sometimes it’s good practice to stop it and recap what you (both of you) are doing: what you’ve achieved, what’s working, and what’s not.

I would really like to tell whoever is training Gemini to work a bit more on its self-esteem!
I tried telling it to stop apologizing and focus on the code, but the situation only got worse; in that case, starting a brand-new chat is a good idea.

I know what you’re talking about: I rage-quit, turned off my PC mid-sigh, and went to bed… The day after, with the knowledge gained the day before, we fixed the issue.

We can’t expect it to fix everything from a single prompt; we have to learn from each other, I guess.