Before the new Gemini Pro was released, everything was great.
But after it came out, it doesn’t process Thinking, which it should, since I’m sending it a whole document.
I had to put <thinking> at the beginning of the request to get it thinking; otherwise it won’t.
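The prefix workaround quoted above can be sketched as a small helper that prepends a literal <thinking> marker to the prompt text before it is sent. The function name is hypothetical; nothing here is an official API, and whether the marker actually triggers the reasoning display is only a user-reported observation:

```python
def with_thinking_prefix(prompt: str) -> str:
    """Prepend a literal <thinking> marker to a prompt.

    Mirrors the manual workaround reported above: some users say that
    starting the request with <thinking> nudges the model into showing
    its reasoning steps. Helper name and behavior are illustrative only.
    """
    marker = "<thinking>"
    if prompt.lstrip().startswith(marker):
        return prompt  # already prefixed; avoid doubling the marker
    return f"{marker}\n{prompt}"
```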
I’m experiencing a lot of thinking issues.
Many people have a problem with the ‘thought’ (thinking) module. Unfortunately, as soon as the context becomes longer, the module stops being used.
Technical Report: Inconsistent Reasoning Display in Gemini 2.5 Pro (Post May 2025 Update)
Issue Summary
A bug has been observed in the consistency of Gemini 2.5 Pro’s reasoning display functionality (model ID gemini-2.5-pro-preview-05-06) compared to previous versions (e.g., gemini-2.5-pro-preview-03-25). The ‘05-06’ model, updated around the May 2025 checkpoint, sometimes fails to display its reasoning process (“thinking steps” or “reasoning module”) under the specific conditions described below. This does not happen with version ‘03-25’, which more reliably showed its thinking steps.
Detailed Observations
When sent complex messages requiring multi-step reasoning, version ‘05-06’ of Gemini 2.5 Pro sometimes provides direct answers without displaying the intermediate thought process. This behavior differs from version ‘03-25’, which more consistently demonstrated step-by-step reasoning.
Key characteristics of this inconsistency include:
- Happens in Long Chat Sessions: The failure to display reasoning steps is significantly more frequent, or occurs almost exclusively, in extended chat sessions within Google AI Studio that already contain a substantial amount of text (i.e., many previous turns). In new or short chat sessions, the reasoning display appears more reliably.
- Effect of Retrying Input: In instances where the reasoning display initially fails to appear during a long chat session, resending the exact same input multiple times can eventually trigger the successful display of the reasoning module.
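The retry behavior described above can be expressed as a simple loop. The `send` callable and the shape of the response dict (a `"thinking"` key holding any visible reasoning steps) are assumptions made for illustration only, not the actual AI Studio or Gemini API:

```python
from typing import Callable, Dict, Tuple


def retry_until_thinking(
    send: Callable[[str], Dict],
    prompt: str,
    max_attempts: int = 5,
) -> Tuple[Dict, int]:
    """Resend the exact same prompt until the response includes visible
    thinking steps, or the attempt budget is exhausted.

    `send` stands in for whatever call issues the request; the response
    is assumed to expose its reasoning under a "thinking" key. Both are
    illustrative assumptions, not a real API.
    """
    response: Dict = {}
    for attempt in range(1, max_attempts + 1):
        response = send(prompt)
        if response.get("thinking"):  # reasoning display appeared
            return response, attempt
    return response, max_attempts  # gave up without seeing reasoning
```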
This inconsistent display behavior makes it difficult to verify the model’s thinking process for a given response, especially during extended interactions.
Impact Assessment
This inconsistency significantly impacts users who rely on visible reasoning chains to:
- Verify the model’s logical approach and ensure thorough analysis has occurred.
- Understand how the model arrived at its conclusions.
- Build trust in the accuracy and reliability of responses to complex queries.
- Utilize the model for educational or demonstrative purposes where “showing the work” is valuable.
- Debug or refine prompts effectively by observing the model’s interpretation and execution steps.
Potential Causes
The issue’s characteristics, particularly its link to long chat sessions and the success of retries, suggest potential causes such as:
- Technical Regression or Instability: A regression in the ‘05-06’ model or the Google AI Studio platform affecting the consistent generation or rendering of the reasoning display, especially under specific session states or loads. The success of retries indicates a temporary issue rather than a complete or deterministic failure.
- Resource Management in Long Contexts: Challenges related to processing, memory, or display limitations within Google AI Studio or the model backend when handling very long chat histories, potentially leading to the de-prioritization, truncation, or dropping of the reasoning display to maintain core response performance.
- Session State or UI Handling Issues: Problems with the accumulated session state or UI rendering in extended chats within AI Studio that interfere with the thinking display mechanism.
- Optimization Adjustments with Unintended Consequences: Changes in the ‘05-06’ model or AI Studio, possibly aimed at optimizing performance or output conciseness in long dialogues, that inadvertently or too aggressively suppress the visibility of reasoning steps.
- Intermittent Communication or Processing Errors: Transient errors in processing or communicating the thinking process data from the model to the AI Studio interface, which may be more likely to occur in sessions with heavy data exchange or prolonged activity.
Requested Resolution
I request that Google investigate and address this issue to restore consistent and reliable reasoning display functionality in Gemini 2.5 Pro (‘05-06’) within Google AI Studio. The ability to see the model’s reasoning steps, especially under the conditions described, is critical for many users.