I’ve been experiencing a consistent bug for the past few days with gemini-3.1-pro-preview in Google AI Studio. The model frequently returns incomplete or truncated responses — especially when generating code or structured answers — even though the dashboard log traces show a normal finishReason: "STOP". This is not a UI issue; the backend response itself is cut off mid-sentence or mid-block.
**Exact message ID** (visible in the dashboard traces/logs): `bjPJaduaAqGBmtkP_KHGoQI`
```json
{
  "model": "models/gemini-3.1-pro-preview",
  "temperature": 1,
  "topK": 64,
  "topP": 0.95,
  "endTokens": [],
  "tokenLimits": 65536,
  "responseMimeType": "text/plain",
  "safetyCatFilters": [
    { "category": "HARM_CATEGORY_HARASSMENT", "threshold": "OFF" },
    { "category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "OFF" },
    { "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT", "threshold": "OFF" },
    { "category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "OFF" }
  ],
  "enableCodeExecution": false,
  "enableFunctionCalling": false,
  "functionDeclarations": [],
  "enableAutoFunctionResponse": false,
  "enableSearchAsATool": true,
  "googleSearch": [],
  "enableBrowseAsATool": false,
  "responseModalities": []
}
```
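For anyone trying to reproduce this outside AI Studio, the run settings above map roughly onto a REST `generateContent` request body. This is a hedged sketch, not a verified repro: the field names below (`maxOutputTokens`, `safetySettings`, `tools`) follow the public Generative Language API as I understand it, and may not correspond one-to-one with AI Studio's internal config dump (`tokenLimits`, `safetyCatFilters`, `enableSearchAsATool`). The `"OFF"` threshold is copied straight from the run config above.

```python
import json

# Model and sampling settings copied from the AI Studio run config above.
MODEL = "models/gemini-3.1-pro-preview"

def build_payload(prompt: str) -> dict:
    """Translate the AI Studio run settings into a generateContent request
    body. Field names follow the public REST API, not AI Studio's internal
    dump (assumed mapping: tokenLimits -> maxOutputTokens,
    safetyCatFilters -> safetySettings)."""
    return {
        "contents": [{"role": "user", "parts": [{"text": prompt}]}],
        "generationConfig": {
            "temperature": 1,
            "topK": 64,
            "topP": 0.95,
            "maxOutputTokens": 65536,
            "responseMimeType": "text/plain",
        },
        "safetySettings": [
            {"category": c, "threshold": "OFF"}  # "OFF" per the run config
            for c in (
                "HARM_CATEGORY_HARASSMENT",
                "HARM_CATEGORY_HATE_SPEECH",
                "HARM_CATEGORY_SEXUALLY_EXPLICIT",
                "HARM_CATEGORY_DANGEROUS_CONTENT",
            )
        ],
        # "Search as a tool" was enabled in the failing runs.
        "tools": [{"google_search": {}}],
    }

if __name__ == "__main__":
    print(json.dumps(build_payload("repro prompt"), indent=2))
```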
The response stops abruptly. The output is clearly missing the remainder of the code block or explanation. Retrying the same prompt (or slight variations) sometimes succeeds, sometimes fails again at the same point. Token usage is well under the 65k limit.
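Since the server reports `finishReason: "STOP"`, the truncation can only be caught client-side. A cheap heuristic that catches the code-block case I'm seeing is to check for an unclosed ``` fence and retry; the helpers below are hypothetical (not part of any SDK), and the heuristic obviously won't catch a mid-sentence cutoff outside a code block.

```python
def looks_truncated(text: str) -> bool:
    """Heuristic truncation check: an odd number of ``` fences means a
    code block was opened but never closed, which matches the failure
    mode in this report. Mid-sentence cutoffs in plain prose are not
    detected."""
    return text.count("```") % 2 == 1

def generate_with_retry(generate, prompt: str, attempts: int = 3) -> str:
    """Call `generate` (a placeholder for whatever client call you use,
    returning the model's text) and retry while the output looks cut off.
    Returns the last attempt even if it still looks truncated."""
    text = ""
    for _ in range(attempts):
        text = generate(prompt)
        if not looks_truncated(text):
            return text
    return text
```

This mirrors what I'm doing by hand: the retry sometimes completes, so a bounded retry loop at least papers over the intermittent failures until the root cause is fixed.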
This matches symptoms reported in these related threads (but my case includes the exact message ID and full request parameters for easier reproduction):
- Code Generated by 3.1 Pro gets truncated in AI Studio — code blocks are truncated or malformed only in AI Studio (not in the regular Gemini app).
- Gemini 3.1 Flash Lite early response with finishReason=STOP — same normal STOP reason but unfinished output.