[Bug] Gemini 3.1 Pro leaking raw _thought blocks and stuck in infinite "Done" loop

Hello guys,

I’m experiencing a severe (and somewhat comical) bug with the Gemini 3.1 Pro model.

I wanted to test how the model would respond to a prompt, as I had just run the exact same test inside the Cursor IDE. However, when I asked this simple, routine development question (“how do I start cloudflared and run uvicorn?”, originally asked in Portuguese), the model failed to format its response properly. Instead of producing the final markdown output, it leaked its entire internal reasoning process and then broke down completely into an endless loop.

Key Issues Observed:

  1. System Prompt/Instruction Leak: The model output its raw thought process, beginning with _thought CRITICAL INSTRUCTION 1:, revealing its internal rules for using tools like view_file and grep_search.

  2. Exposed Monologue: It printed out its entire internal debate on whether to use tools to read my pyproject.toml and app/main.py or just give a generic answer.

  3. Infinite Token Loop: At the end of its reasoning, instead of closing the thought block and returning the response text, it got stuck in a massive repetitive loop, emitting phrases like “Done”, “Outputting…”, “Yes”, and “End thought” for hundreds of lines (literally a wall of text).

Impact: This behavior burns through context and tokens unnecessarily and completely breaks the chat UX. It looks like the model is failing to recognize its own stop sequences, or failing to transition from the <thought> state to the actual response state.
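As a stopgap while this is investigated, a streaming client could guard against exactly this failure mode by cutting off the stream once the same short chunk repeats too many times in a row. This is only a sketch of a client-side mitigation, not the platform's actual fix; the function name and threshold are hypothetical.

```python
def truncate_runaway(chunks, max_repeats=5):
    """Join streamed text chunks, but stop consuming the stream once the
    same chunk has arrived `max_repeats` times consecutively (the output
    keeps at most `max_repeats - 1` copies of the repeated chunk)."""
    out = []
    prev, run = None, 0
    for chunk in chunks:
        if chunk == prev:
            run += 1
            if run >= max_repeats:
                break  # assume the model is stuck in a loop; stop here
        else:
            prev, run = chunk, 1
        out.append(chunk)
    return "".join(out)
```

For example, a stream that degenerates into repeated “Done” chunks would be truncated after a couple of repeats instead of burning tokens for hundreds of lines, while a normal, non-repeating stream passes through unchanged.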

I’ve attached screenshots showing the UI breaking and a snippet of the raw output log.

Is this a known issue with the current routing of 3.1 Pro on the platform? Let me know if you need any trace_id or session logs to debug.

Antigravity Version: 1.18.4
VSCode OSS Version: 1.107.0
Commit: c19fdcaaf941f1ddd45860bfe2449ac40a3164c2
Date: 2026-02-20T22:30:09.460Z
Electron: 39.2.3
Chromium: 142.0.7444.175
Node.js: 22.21.1
V8: 14.2.231.21-electron.0
OS: Linux x64 6.17.0-14-generic
Language Server CL: 873061499

Hello @Carlos01,
Thank you for bringing this to our attention. We have escalated the issue to our internal teams for a thorough investigation.

To ensure our engineering team can investigate and resolve these issues effectively, we highly recommend filing bug reports directly through the Antigravity in-app feedback tool. You can do this by navigating to the top-right corner of the interface, clicking the Feedback icon, and selecting Report Issue.


Hi @Abhijit_Pramanik

Thank you for the quick response and for escalating the issue! I will go ahead and submit the report through the Antigravity in-app feedback tool right now so the engineering team can get all the necessary background data.

Let me know if you need anything else from my end.
