Every day a 503, are you sure??
Same issue here. I've migrated from OpenAI, but am quickly regretting that choice.
Thanks for starting this thread, but I think we can conclude that Google (@Logan_Kilpatrick) is not going to fix this. Rumors are building up about the release of Gemini 3. I believe all energy is focused on the Gemini 3 release so they can burn this ship (Gemini 2.5) down.
I think before you want to destroy a ship, first find out what it's made of, what equipment it carries, and what kind of people it carries.
Once you know that, you can then decide whether or not to destroy the ship, because your question is causing it to be destroyed first.
Agreed, which is lame enough: sure, if 3.0 is such a change that this issue goes away, then why bother, but it certainly doesn't give me any warm and fuzzy feelings that this behavior won't continue. I'm pretty sure we're getting charged for the empty responses, but it's challenging to know for sure. I put in a feature request that each request should return an identifier so you can finalize the cost of token usage relative to whatever rate you're at, based on the token count. It would be nice to have that level of itemized billing: knowing that this request cost $10 and that one $1.
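In the meantime, you can get a rough per-request cost estimate yourself from the `usageMetadata` block that Gemini responses carry (`promptTokenCount` / `candidatesTokenCount`). A minimal sketch; the per-million-token rates below are placeholders I made up, not real pricing, so substitute the current numbers from the pricing page:

```javascript
// Placeholder rates in USD per million tokens -- NOT real Gemini pricing.
const RATES = { inputPerM: 0.30, outputPerM: 2.50 };

// Estimate the cost of one request from the response's usageMetadata.
function estimateCostUSD(usageMetadata) {
  const input = usageMetadata?.promptTokenCount ?? 0;
  const output = usageMetadata?.candidatesTokenCount ?? 0;
  return (input / 1e6) * RATES.inputPerM + (output / 1e6) * RATES.outputPerM;
}
```

It won't settle the question of whether empty responses are billed, but logging this per request at least makes the token counts visible.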
You're just requesting a feature, not specifying what kind of feature you want.
Pricing is a question. How can you discuss pricing when you don't know the exact feature request you want? Please clarify.
Whether you migrate or transmigrate, you'll still face the same problem, but with a different concept.
For real? I was using the OpenAI-compatible interface and had this issue show up randomly; with either 2.5-flash or 2.5-pro it randomly comes up and breaks the flow. I thought it was an interface issue, but this long thread seems to prove me wrong. Does anyone have a better solution than retrying?
Same issue here - **gemini-2.5-flash streaming stops prematurely with finish_reason=STOP in development environment, but works fine in production with identical code**. Input/output well within limits, no safety issues. Development environment has more complex context which seems to trigger this bug. Tried prompt completion instructions and prefix rotation with limited success. Any other workarounds?
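For anyone else chasing the streaming case, here's the sketch I use to make the premature-STOP symptom visible while consuming a stream. The chunk shape assumed here (candidates → content → parts, plus `finishReason`) mirrors what the @google/genai Node SDK yields; adjust the field paths if your client differs:

```javascript
// Consume a streaming response and flag the "STOP with no text" failure mode.
// `stream` is any async iterable of response chunks.
async function consumeStream(stream) {
  let text = "";
  let finishReason = null;
  for await (const chunk of stream) {
    const parts = chunk.candidates?.[0]?.content?.parts ?? [];
    text += parts.map((p) => p.text ?? "").join("");
    finishReason = chunk.candidates?.[0]?.finishReason ?? finishReason;
  }
  if (finishReason === "STOP" && text.trim() === "") {
    // This is the bug discussed in this thread: a "successful" empty stream.
    console.warn("Stream finished with STOP but produced no text");
  }
  return { text, finishReason };
}
```

Logging both fields per request is what let me confirm it was the model, not my dev environment, dropping the output.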
We are experiencing the same problem when using @google/genai in Node.js: the response consistently fails to return.
This makes the model unusable in production, and we've been encountering this issue for some time now. It's concerning that such a critical problem has not been prioritized accordingly.
Seems like I need to start a new thread. This issue is not only present in 3.0, but my retry workaround no longer works. I am working with our account team and will post any findings.
The key here is that text and candidates are None, along with most of the other fields. I was really hoping this would just go away, but it seems it just got worse.
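For reference, the retry workaround several of us have been using is just "treat an empty candidate list or empty text as a failure and call again with backoff." A minimal sketch, where `generate` stands in for whatever function makes your actual API call:

```javascript
// Retry when the model returns an empty response (the bug in this thread).
// `generate` is a placeholder for your real call via @google/genai or raw fetch.
async function generateWithRetry(generate, maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const response = await generate();
    const parts = response?.candidates?.[0]?.content?.parts ?? [];
    const text = parts.map((p) => p.text ?? "").join("");
    if (text.trim().length > 0) return text;
    // Empty body despite a "successful" finish: back off and try again.
    await new Promise((r) => setTimeout(r, 500 * 2 ** (attempt - 1)));
  }
  throw new Error("Model returned an empty response after retries");
}
```

It no longer saves me on 3.0 (the retries come back empty too), but posting it in case it still helps on 2.5.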
For me this happens when using function calls.
I'm so glad I found this thread. I spent two days trying to debug this, only to find out that it's the model delivering empty results. So, as others have stated here: this is not usable for production.
I tried Pro 3, Pro 2.5, Flash 2.5 - all with the same issue.
I can't believe that this is such a longstanding bug. I mean, don't you (Google/Gemini) want people to use your API?
Can confirm same issue with flash version.
The issue reproduces only when using streaming requests.
We're using systemInstructions with the search tool; in that combination it is not working.
The issue is not in the genai client for Node.js; I've also tested a raw fetch and got an empty HTTP body with a status code of 200. The results for some prompts are pretty consistent.
I'll try removing system instructions and/or using a thinking budget.
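To reproduce the raw-fetch observation above, this is roughly how I check for the empty-200 case against the REST endpoint directly. The URL, body, and header are the standard Gemini REST call shape, but treat them as illustrative and adjust to your endpoint:

```javascript
// Call the REST endpoint directly and surface the "HTTP 200, empty body" case.
async function callGeminiRaw(url, body, apiKey) {
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json", "x-goog-api-key": apiKey },
    body: JSON.stringify(body),
  });
  const raw = await res.text();
  if (res.ok && raw.trim() === "") {
    // The failure mode described above: success status, nothing in the body.
    throw new Error("Empty 200 response from the API");
  }
  if (!res.ok) throw new Error(`HTTP ${res.status}: ${raw}`);
  return JSON.parse(raw);
}
```

Reading the body with `res.text()` before parsing is the key part; `res.json()` on an empty body just gives you an unhelpful parse error.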
This is a classic, long-standing issue with the Gemini API. The whole Gemini family has the same problem: it returns an empty (but not error!) response, which breaks various LLM-based infrastructures. Please, Google, fix this ASAP. None of the other major LLM services (OpenAI, Anthropic, Grok, and so on) have this issue.
The issue still persists. Dear Google, can you please fix this? I would love to use the Gemini API, but as it stands it is unusable.
This is still an issue, the same issue described in April (8 months ago). How are we supposed to rely on this model for production use cases?
Also, I think the docs are out of date. Shouldn't the role on this be "tool" and not "user"?
I'm encountering the same issue, and it always happens with functionCall.
I've experienced this problem on both Gemini 2.5 Flash-Lite and Flash.
After the API returns a functionCall, I append the functionResponse and call the API again. I then receive an empty response like contents:[{role:'model'}].
This issue seems to occur more easily during specific time periods. Occasionally, I successfully receive the correct response.text.
When this problem occurs, if I continue the conversation after the functionResponse, the dialogue flow seems to be able to proceed. Therefore, I tried to directly add a piece of text as a system hint within the functionResponse content, and this seems to largely prevent the issue.
Example:

```js
contents: [
  ...,
  // function_call content
  { role: "model", parts: [{ functionCall: { name: ..., args: { ... } } }] },
  // function_response content
  { role: "user", parts: [
    { functionResponse: { name: ..., response: { ... } } },
    // system hint
    { text: "<system>Please respond to the user based on the response or proceed to the next step.</system>" },
  ]},
]
```
By adding this system hint, I've found in my tests that it seems to prevent the issue of receiving an empty response.text after the function calling. However, it might have some impact on the output, so careful tuning of the prompt and handling of the output text is required.
I believe this is not a standard solution. I couldn't find any relevant experience or documentation about adding extra information within the functionResponse content.
I still hope the Google team can resolve this issue as soon as possible.
I used this approach and feel like it's improved the situation. Thank you for posting it!
