Gemini 3.0 Pro preview with empty response.text

This is really just a continuation of:

This issue is not only present in 3.0, but my retry work-around no longer works. I am working with our account team and will post any findings.

The key here is that response.text and response.candidates are None, along with most of the other fields. I was really hoping this would just go away, but it seems it has only gotten worse.
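For context, the retry work-around was essentially: if response.text comes back empty, issue the same request again. A rough sketch, assuming the google-genai Python SDK; the model name and prompt are placeholders, not the actual pipeline code.

```python
from google import genai

client = genai.Client()  # picks up GEMINI_API_KEY from the environment

def generate_with_retry(prompt: str, max_attempts: int = 3):
    """Retry when the model returns a response with no text."""
    for attempt in range(1, max_attempts + 1):
        response = client.models.generate_content(
            model="gemini-3-pro-preview",  # placeholder model name
            contents=prompt,
        )
        # The failure mode in this thread: response.text and
        # response.candidates come back as None despite a "successful" call.
        if response.text:
            return response
        print(f"Attempt {attempt}: empty response.text, retrying...")
    raise RuntimeError("Empty response on every attempt")
```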


Hi @Bryan_Hughes,

Can you please confirm if you are still facing this issue?

Hi, I am. I have an open support ticket which has been escalated. I will post here what they come back with.

Hi, I have also been facing the same issue. I thought I could solve it by enforcing a schema with minItems, but I still get back empty responses. I hope you can help me. (The schema contains one required property with items and the minItems key set to 1, as defined in the Google docs.)
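For reference, the schema is shaped roughly like this. A minimal sketch assuming the google-genai Python SDK, where types.Schema exposes min_items for the API's minItems; the property and field names are illustrative, not my exact schema.

```python
from google.genai import types

# Illustrative schema: one required array property with minItems = 1.
response_schema = types.Schema(
    type=types.Type.OBJECT,
    properties={
        "items": types.Schema(
            type=types.Type.ARRAY,
            min_items=1,  # at least one element must be returned
            items=types.Schema(
                type=types.Type.OBJECT,
                properties={
                    "label": types.Schema(type=types.Type.STRING),
                    "score": types.Schema(type=types.Type.INTEGER),
                },
                required=["label", "score"],
            ),
        ),
    },
    required=["items"],
)
```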

Hi @Miguel_Mendez,

We are not able to replicate the issue on our end. Please share a code snippet or error screenshot so we can identify the cause.

Gemini API Bug Report: Empty Responses with STOP Status

Date: 2025-12-23
Models Tested: gemini-3-flash-preview, gemini-3-pro-preview, gemini-2.5-pro
Dataset: (100 videos, 5 runs per model)

Summary

gemini-3-pro-preview occasionally returns finishReason: STOP (indicating success) but produces only thinking/reasoning output without the actual JSON response. The model’s thinking text claims it generated the final output, but the actual candidate output is missing.

Affected Model

  • gemini-3-pro-preview: 5 failures out of 500 requests (1% failure rate)

Evidence

Failed Videos (gemini-3-pro-preview, Run 5)

| Video ID | finishReason | candidatesTokenCount | thoughtsTokenCount | totalTokenCount |
|----------|--------------|----------------------|--------------------|-----------------|
| 7IPW7 | STOP | null | 5,099 | 14,028 |
| XOOPP | STOP | null | 3,889 | 11,836 |
| Y79PC | STOP | null | 2,834 | 11,835 |
| WISO0 | STOP | null | 6,401 | 16,021 |
| BIQGN | STOP | null | 4,149 | 13,256 |

Response Structure

All failed responses have the same pattern:

{
  "candidates": [{
    "content": {
      "parts": [
        {"text": "**Thinking header**\n\n<reasoning about JSON>..."},
        {"text": "**Another thinking header**\n\n<more reasoning>..."}
      ],
      "role": "model"
    },
    "finishReason": "STOP"
  }],
  "usageMetadata": {
    "promptTokenCount": 9000,
    "candidatesTokenCount": null,  // <-- BUG: No output tokens
    "thoughtsTokenCount": 5000,
    "totalTokenCount": 14000
  }
}
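The pattern can be detected programmatically. A minimal sketch, assuming the google-genai Python SDK, where thought parts are flagged via part.thought when includeThoughts is enabled:

```python
def is_thought_only_response(response) -> bool:
    """Return True if the response matches the failure pattern above."""
    usage = response.usage_metadata
    # candidatesTokenCount comes back as null/None even though finishReason is STOP.
    if usage is not None and not usage.candidates_token_count:
        return True
    candidate = response.candidates[0]
    parts = candidate.content.parts or []
    # Every part is reasoning text; no part contains the final JSON.
    return all(getattr(part, "thought", False) for part in parts)
```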

Example Thinking Output (Y79PC)

The model’s thinking text claims success but no JSON is produced:

Part 0:

“I’ve crafted detailed descriptions for each video, focusing on visual elements like the woman’s actions, clothing, and the room’s decor. Refining the descriptions now; ensuring comprehensive detail… The final JSON structure, complete with precise timestamps, is now prepared for its final review.”

Part 1:

“I’ve checked the JSON format, ensuring all fields are correctly formatted and adhering to the specifications. Descriptions are detailed, the start and end times match the intended moments, and it is complete, passing all the final checks. The final JSON is ready.”

Despite claiming “The final JSON is ready”, no JSON was actually emitted.

Request Configuration

  • Thinking enabled: {"includeThoughts": true}
  • Response MIME type: application/json
  • Response schema: Yes, structured JSON schema provided (type: OBJECT with nested ARRAY of OBJECTs, 7 required fields per item including STRING and INTEGER types)
  • Temperature: 1.0
  • Max output tokens: 65536

Note: JSON schema was verified to be correctly included in all batch requests.
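For reproduction purposes, the configuration above corresponds roughly to the following request. This is a sketch with the google-genai Python SDK; the schema and prompt are simplified placeholders, not the exact batch request.

```python
from google import genai
from google.genai import types

client = genai.Client()

# Placeholder for the actual nested schema (OBJECT with an ARRAY of OBJECTs,
# 7 required fields per item); simplified here for illustration.
response_schema = types.Schema(
    type=types.Type.OBJECT,
    properties={
        "events": types.Schema(
            type=types.Type.ARRAY,
            items=types.Schema(
                type=types.Type.OBJECT,
                properties={
                    "description": types.Schema(type=types.Type.STRING),
                    "start_ms": types.Schema(type=types.Type.INTEGER),
                },
                required=["description", "start_ms"],
            ),
        ),
    },
    required=["events"],
)

config = types.GenerateContentConfig(
    thinking_config=types.ThinkingConfig(include_thoughts=True),
    response_mime_type="application/json",
    response_schema=response_schema,
    temperature=1.0,
    max_output_tokens=65536,
)

response = client.models.generate_content(
    model="gemini-3-pro-preview",
    contents="<video prompt placeholder>",
    config=config,
)
print(response.text)
```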

Conclusion

The bug appears to be a model-level issue where:

  1. The thinking process completes normally
  2. The model believes it has generated the final output
  3. But the actual JSON response is never emitted to the output
  4. finishReason incorrectly reports STOP instead of indicating a failure
  5. candidatesTokenCount is null, confirming no actual output was generated

This issue was observed only in gemini-3-pro-preview, not in gemini-2.5-pro or gemini-3-flash-preview, for this dataset.

For comparison, this is what a successful example looks like:
Successful Response

| Field | Value |
|----------------------|-------------------------------------------|
| finishReason | STOP |
| candidatesTokenCount | 478 (has value!) |
| thoughtsTokenCount | 3,844 |
| Part 0 | Thinking text (333 chars) |
| Part 1 | Actual JSON starting with { (1,695 chars) |


Is the structured outputs feature implemented at the sampling level (do you guys enforce grammars)? Any recommendations for avoiding empty responses? Should I use tool calls instead? As I mentioned, I am not even getting an empty JSON, and my schema enforces a minimum of one item.
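For reference, the tool-call alternative I am considering would look roughly like this. A sketch with the google-genai Python SDK; the function name and schema are illustrative only, not a confirmed fix for the empty responses.

```python
from google import genai
from google.genai import types

client = genai.Client()

# Wrap the schema in a function declaration and force the model to call it.
submit_items = types.FunctionDeclaration(
    name="submit_items",
    description="Return the extracted items.",
    parameters=types.Schema(
        type=types.Type.OBJECT,
        properties={
            "items": types.Schema(
                type=types.Type.ARRAY,
                items=types.Schema(type=types.Type.STRING),
            ),
        },
        required=["items"],
    ),
)

response = client.models.generate_content(
    model="gemini-3-pro-preview",
    contents="<prompt placeholder>",
    config=types.GenerateContentConfig(
        tools=[types.Tool(function_declarations=[submit_items])],
        tool_config=types.ToolConfig(
            function_calling_config=types.FunctionCallingConfig(mode="ANY"),
        ),
    ),
)

# The structured arguments arrive as a function call instead of JSON text.
print(response.candidates[0].content.parts[0].function_call)
```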

Hi Pooja, I hope you are doing well. I was wondering whether you need any more information, beyond what I already sent, to resolve the issue. Please let me know.

Hi @Miguel_Mendez,
Thanks for providing the detailed bug report. We have escalated this issue to the concerned team for further investigation.