Today I noticed there are now three ‘model’ options to choose from: Fast, Thinking, and Pro. Over the past two weeks I’ve been having major issues with context retention, with the model pruning information too aggressively even within 10-20 prompts. When I saw the new model options, I thought maybe that was solved, but if anything, they’ve just verified and codified the shortcomings of 3.0.
Asking Gemini directly about the differences, it confirmed that Thinking allows many prompts per day (100-500), but it uses ‘Dynamic’ thinking, which means it may prune/summarize earlier chat turns to maintain speed in as little as 10-20 turns. Pro, on the other hand, does not do that and maintains its coherence across long context windows, but…has a 10-20 prompts/day limit.
Honestly, neither option solves the problem for those of us who use Gemini as a partner in long, back-and-forth conversations: brainstorming business or creative ideas, troubleshooting coding or technical problems, or anything else you need to work through with it from start to finish.
I understand there is a computational cost to supporting both a large context window and a high daily prompt limit, but the current setup basically offers the worst of both worlds. Below is the exact quote from Gemini when I asked it to summarize these challenges:
Core Challenges for Long-Term Projects
- The “Thinking” Memory Gap: User feedback indicates that the standard Thinking mode in Gemini 3.0 can feel less reliable for long chats than the previous 2.5 Pro version. It often struggles with “complex logical retrieval” as conversations lengthen, sometimes claiming ignorance of information visible just a few scrolls up.
- The “Pro” Usage Wall: While Pro (Deep Think) solves these amnesia issues by utilizing a deeper reasoning tree, its extremely low daily cap (as few as 10 prompts for some tiers) makes it impossible to use as a primary interface for active, turn-based troubleshooting.
- Instruction Adherence Decay: In long sessions, models may experience “drift,” where they revert to core training (e.g., being helpful by rewriting text) rather than following your specific project constraints (e.g., “do not rewrite”).
I’d pay more per month (within reason) for the best of both worlds on context retention and prompts/day. You all had this working in 2.5, so I’m not sure why 3.0 is such a drastic step backwards on this front. Heck, you could even just give us access to 2.5 again…