Summary:
Over the past two days, I have been encountering an intermittent issue with gemini-3-pro-preview and gemini-2.5 models. The API returns a successful HTTP 200 response with finish_reason: STOP, but the returned content is empty (content=).
Critical Observation:
Despite the content being empty, the usage_metadata indicates that a significant number of output tokens were generated (e.g., ~1500 tokens). Specifically, all generated tokens appear to be categorized as reasoning tokens, with no final response text provided.
It appears the model performs the reasoning step but fails to append the final response, or the API response structure for reasoning models is not being parsed correctly by standard clients when no final text follows.
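For illustration, here is a minimal sketch of the check that flags this case when calling the REST generateContent endpoint directly (variable names are hypothetical; the candidates/usageMetadata field names follow the v1beta response structure):

// Sketch: detect a STOP finish with no text parts in the raw REST response.
// "data" is the parsed JSON body of a generateContent response (hypothetical variable).
const candidate = data?.candidates?.[0];
const text = candidate?.content?.parts?.map((p) => p.text ?? '').join('') ?? '';
if (candidate?.finishReason === 'STOP' && text.length === 0) {
  // usageMetadata still reports output (thinking) tokens even though no text came back.
  console.warn('Empty content despite STOP; usage:', data?.usageMetadata);
}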
Environment Details:
• Models Affected: gemini-3-pro-preview, gemini-2.5
• Client Library: LangChain Google GenAI (all packages updated)
• Date/Time: Occurring frequently over the last 48 hours.
Hi @Haojia_Gu, thanks for reaching out to us.
Could you please share details about the type of request you are making, along with the response you received when the content was empty and a screenshot? This will help us diagnose the issue effectively.
@Sonali_Kumari1 Thanks for the response. The request logs may contain personal or sensitive information, so I can’t share them publicly on the forum. Is there a private or secure channel (e.g. DM or email) where I can provide the details and screenshots?
I’ll add to this, as the previous poster was unable to provide information. In my case, the request contained an image of a herbarium sheet with printed and handwritten text on it.
The issue happens intermittently: the same request can succeed on one run and fail on the next with an empty content response.
The API URL used is: https://generativelanguage.googleapis.com/v1beta/models/gemini-3-pro-preview:generateContent?key=${apiKey}
try {
  // Convert the local image into an inlineData part for the request
  const imagePart = await fileToGenerativePart(imagePath);
  const payload = {
    contents: [
      {
        parts: [
          { text: "Extract all text from this image as plain text, no extra output from you is required, text should not be modified or explained" },
          imagePart,
        ],
      },
    ],
  };
  const response = await fetch(apiUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload),
  });
  const data = await response.json();
  // Log the raw response; on failing runs candidates[0].content comes back without any text parts
  console.log(JSON.stringify(data, null, 2));
} catch (err) {
  console.error('Request failed:', err);
}
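For reference, a minimal sketch of what the fileToGenerativePart helper used above could look like; this helper is not part of any SDK, and the MIME-type handling is an assumption for this example:

import { readFile } from 'node:fs/promises';

// Hypothetical helper: base64-encode a local image file into an inlineData part.
// The mimeType guess below is an assumption; adjust it to the files you send.
async function fileToGenerativePart(path) {
  const data = await readFile(path);
  const mimeType = path.toLowerCase().endsWith('.png') ? 'image/png' : 'image/jpeg';
  return {
    inlineData: {
      mimeType,
      data: data.toString('base64'),
    },
  };
}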
Hi @Haojia_Gu,
Thanks for following up. You can direct message me with the request logs. Please provide a code snippet if possible.
Hi @Terence_Doets,
Try updating your request payload to include safetySettings with lower thresholds, or with all categories set to BLOCK_NONE. This relaxes the default safety filters so the response is less likely to be withheld, which helps the text extraction complete consistently. See the sketch below.
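A minimal sketch of the request payload with all four standard harm categories relaxed (the category and threshold strings are the standard Gemini API enum values; the rest mirrors the payload posted above):

// Same payload as above, with the default safety filters relaxed.
const payload = {
  contents: [
    {
      parts: [
        { text: "Extract all text from this image as plain text, no extra output from you is required, text should not be modified or explained" },
        imagePart,
      ],
    },
  ],
  safetySettings: [
    { category: 'HARM_CATEGORY_HARASSMENT', threshold: 'BLOCK_NONE' },
    { category: 'HARM_CATEGORY_HATE_SPEECH', threshold: 'BLOCK_NONE' },
    { category: 'HARM_CATEGORY_SEXUALLY_EXPLICIT', threshold: 'BLOCK_NONE' },
    { category: 'HARM_CATEGORY_DANGEROUS_CONTENT', threshold: 'BLOCK_NONE' },
  ],
};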
Please let me know if you are still facing this issue after implementing the above changes.