Summary
Gemini 3.0 Pro inside Google AI Studio shows two completely different reasoning behaviors depending on whether the Canvas panel is open.
- Without Canvas: the model outputs only 1–2 extremely short reasoning paragraphs ("model thoughts").
- With Canvas: the model outputs the expected 20–40-section reasoning chain (MaxThinking behavior).
This indicates a UI-level dispatch bug:
Chat mode is incorrectly falling back to MinimalThinking, while Canvas mode correctly activates MaxThinking.
This is not a model quality issue — it is a frontend configuration issue.
Expected Behavior
Gemini 3.0 Pro should:
- Always use full MaxThinking reasoning depth
- Produce complete multi-step reasoning regardless of UI layout
- Behave consistently whether Canvas is open or not
Actual Behavior
Chat Mode WITHOUT Canvas
- Only 1–2 reasoning paragraphs are shown
- Very shallow and generic "model thoughts"
- Cannot solve complex or competition-level reasoning tasks

Example output:
Evaluating the Initial Conditions…
Analyzing the New State…
This is far below the expected reasoning ability of Gemini Pro.
Chat Mode WITH Canvas opened
- Full long-form reasoning is generated (20–40 segments)
- Deep, structured multi-step thinking
- Correct MaxThinking behavior is activated

Example output excerpts:
Exploring Coin Distributions…
Discovering Operation Properties…
Refining the Valuation Function…
Investigating Coin Interactions…
...
Finalizing Reachability Proof…
These two outputs come from the same prompt, same model, in the same session.
Steps to Reproduce
1. Open Google AI Studio
2. Select Gemini 3.0 Pro
3. Do NOT open the Canvas panel
4. Enter a difficult reasoning problem (example provided below)
5. Observe shallow reasoning (1–2 segments)

Then:

6. Open the Canvas panel
7. Re-run the same prompt
8. Observe full multi-step reasoning (20+ segments)
This is 100% reproducible.
Reproducible Example Prompt
(Chinese competition-math problem used in the comparison, given here in English translation)
\textbf{5.}\quad
There are $6$ boxes $B_1,B_2,B_3,B_4,B_5,B_6$; initially each box contains exactly one coin.
At each step, one of the following two operations may be chosen and performed:
\begin{itemize}
\item \textbf{Type 1:} Choose a box $B_j$ ($1 \le j \le 5$) containing at least one coin,
remove one coin from $B_j$, and add $2$ coins to box $B_{j+1}$.
\item \textbf{Type 2:} Choose a box $B_k$ ($1 \le k \le 4$) containing at least one coin,
remove one coin from $B_k$, and swap the entire contents of box $B_{k+1}$ (possibly empty) with box $B_{k+2}$ (possibly also empty).
\end{itemize}
Question: Is it possible to perform finitely many such operations so that boxes $B_1,B_2,B_3,B_4,B_5$ are empty,
while box $B_6$ contains exactly $2010^{2010^{2010}}$ coins?
(Note: $a^{b^c} = a^{(b^c)}$.)
Impact
- Makes Gemini Pro appear "broken" in standard Chat mode
- Users receive drastically downgraded reasoning
- Complex tasks become impossible
- Inconsistent experience across UI modes
- Causes confusion and incorrect model evaluation
Severity
High — this affects all reasoning-heavy workflows and occurs by default in Chat mode.
Likely Root Cause
UI dispatch bug:
- Canvas mode correctly enables MaxThinking
- Chat mode incorrectly switches to a MinimalThinking reasoning budget
The model behaves correctly — the UI calls the wrong internal configuration.
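To make the suspected failure mode concrete, here is a minimal TypeScript sketch. The AI Studio frontend is not public, so every name here (`ThinkingProfile`, `resolveThinkingProfile*`, `UiState`) is hypothetical; the sketch only illustrates the pattern of deriving the reasoning budget from UI layout versus fixing it per model.

```typescript
// Hypothetical sketch of the suspected dispatch logic. None of these names
// come from the real AI Studio codebase; they only model the reported behavior.
type ThinkingProfile = "MaxThinking" | "MinimalThinking";

interface UiState {
  canvasOpen: boolean;
}

// Suspected buggy behavior: the reasoning profile is derived from UI layout,
// so plain Chat mode (canvasOpen === false) silently downgrades reasoning.
function resolveThinkingProfileBuggy(ui: UiState): ThinkingProfile {
  return ui.canvasOpen ? "MaxThinking" : "MinimalThinking";
}

// Proposed behavior: the profile is fixed per model and independent of layout.
function resolveThinkingProfileFixed(_ui: UiState): ThinkingProfile {
  return "MaxThinking";
}

console.log(resolveThinkingProfileBuggy({ canvasOpen: false })); // "MinimalThinking"
console.log(resolveThinkingProfileFixed({ canvasOpen: false })); // "MaxThinking"
```

Under this reading, the fix is a one-line change in the dispatch layer rather than anything on the model side, which matches the observation that the same session produces full reasoning as soon as Canvas is opened.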
Suggested Fix
- Ensure Chat mode defaults to the same MaxThinking profile as Canvas, so reasoning depth is consistent regardless of Canvas visibility
- Remove UI-dependent reasoning configuration
- If different reasoning depths are intended, expose a visible toggle instead of tying depth to UI layout
Attachments
- Screenshots comparing Canvas vs. non-Canvas output
- Raw outputs from both modes
- Exact prompt used
- Steps to reproduce
Thank you
This issue is easily reproducible and critical for advanced reasoning workloads.
Fixing it will restore consistent Gemini 3.0 Pro behavior across all UI modes.
