Subject: Feedback: Gemini 3 Pro Preview - Significant regression in Reasoning, Context Retention, and Safety False Positives compared to 2.5
Context:
- Platform: Google AI Studio
- Model: Gemini 3 Pro Preview
- Comparison Baseline: Gemini 2.5 Pro (Stable)
- Date: Dec 5, 2025
Hi Google Team,
I’ve been extensively testing the new Gemini 3 Pro Preview in AI Studio. While I appreciate the speed improvements, I am experiencing severe regressions in reasoning capabilities and context adherence compared to the current stable 2.5 version. The model feels “lobotomized” in complex workflows.
Here are my 4 core issues:
1. Over-optimization for “Quick Fixes” (Loss of Nuance)
Compared to v2.5, v3.0 seems aggressively tuned for immediate resolution. It rushes to a final output without sufficient internal reasoning. Even with explicit system instructions to “think step-by-step” or to evaluate options before answering, the model forces a quick solution. It feels like I have to use a crowbar to get it to slow down, and even then it struggles to maintain a deliberative pace.
2. Inability to Maintain Evaluative Dialogue
When the task requires evaluating multiple architectural solutions or discussing pros/cons, v3.0 fails to engage in a back-and-forth exchange.
- Behavior: It interprets any form of critique or follow-up question as a command to immediately generate “fixed” code.
- Impact: It is nearly impossible to have a constructive debate about design choices. It skips the “Why” and jumps straight to an (often premature) “How”.
3. “Tunnel Vision” & Weak Adherence to Project Briefs
There is a noticeable degradation in how v3.0 handles static Project Information (System Instructions) over the course of a long session.
- Issue: Unlike 2.5, the preview model develops “tunnel vision” very quickly (severe recency bias). It ignores the broader project context defined in the briefing and focuses solely on the immediate prompt.
- Result: Logic that violates the initial constraints is generated because the model has effectively “forgotten” the global rules set at the beginning.
4. Aggressive & Context-Blind Safety Filters
The safety guardrails in 3.0 seem to have regressed in contextual understanding, triggering false positives on harmless creative-writing content.
- Example: A generated story set in Victorian England involving Sherlock Holmes was blocked.
- Triggers: The prompt contained the words “girl”, “19 years old”, and “street”.
- Context: The character was a flower seller being questioned by the detective.
- Observation: The filter reacted to keywords in isolation, completely ignoring the harmless narrative context. This is stricter and less intelligent than v2.5.
Summary:
Currently, Gemini 3 Pro Preview feels unusable for complex, iterative development tasks. It behaves more like a fast autocomplete engine than a reasoning partner. Please address the context-retention issues and the over-eager “fix-it” reflex before promoting this build to stable.
Has anyone else experienced this “tunnel vision” with the new Preview?