I am running into an issue with the Gemini 2.5 Pro model. I have a tool named “code” that the model can use to run code in a sandbox. Gemini 2.5 Pro just ends the turn when it tries to call this tool: no errors appear, the stream simply stops. I am using the Vercel AI SDK.
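Roughly how the tool is wired up (a minimal sketch, assuming the AI SDK v4-style `tool()` helper; `runInSandbox` stands in for my actual sandbox runner):

```ts
import { streamText, tool } from 'ai';
import { google } from '@ai-sdk/google';
import { z } from 'zod';

// Placeholder for the actual sandbox runner.
async function runInSandbox(source: string): Promise<string> {
  return `ran: ${source}`;
}

const result = streamText({
  model: google('gemini-2.5-pro'),
  tools: {
    code: tool({
      description: 'Run code in a sandbox and return its output',
      parameters: z.object({
        source: z.string().describe('The code to execute'),
      }),
      execute: async ({ source }) => runInSandbox(source),
    }),
  },
  prompt: 'Write and run a script that prints the current date.',
});
```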
It ends the turn without any errors and without calling the code tool. A few observations:

- Other tools, like my ‘command’ tool, consistently work with this model.
- When the code tool has been successfully called by a different model (like Claude 3.5 Sonnet) and I then switch to Gemini 2.5 Pro, the model is suddenly able to consistently call the ‘code’ tool.
- When I let Gemini 2.5 Pro call a different tool (like my command tool) first and then let it call the code tool, that works most of the time as well.
I have no idea why this is happening: the findings above show that Gemini 2.5 Pro is sometimes able to call the tool, so the tool definition itself should not be the issue. It can also call my other tools without a problem, and other models call all of my tools without problems.
I am facing the same issue. At times, when I execute the tool and return the result, Gemini sends back an empty response and the conversation ends. It seems to happen randomly.
From the attached screenshot (repeated calls to the API) with Gemini 2.5 Pro Preview, you can see that the model randomly returns None values instead of FunctionCall objects.
I’m experiencing the same problem. I’m using LangChain’s ChatOpenAI to call the Gemini 2.5 API via OpenRouter, and sometimes the response doesn’t include any tool calls, even though it definitely should.
It seems like a case of hallucination to me, because when I enabled a retry mechanism, it occasionally returns the expected tool calls.
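Roughly what my retry looks like (a sketch with LangChain JS; the `get_data` tool, the model id, and the attempt cap are just illustrative):

```ts
import { ChatOpenAI } from '@langchain/openai';
import type { BaseMessageLike } from '@langchain/core/messages';

// Hypothetical tool definition in OpenAI function format.
const tools = [
  {
    type: 'function' as const,
    function: {
      name: 'get_data',
      description: 'Fetch data for the report',
      parameters: { type: 'object', properties: { query: { type: 'string' } } },
    },
  },
];

// Gemini 2.5 via OpenRouter's OpenAI-compatible endpoint.
const model = new ChatOpenAI({
  model: 'google/gemini-2.5-pro', // model id may differ on your account
  apiKey: process.env.OPENROUTER_API_KEY,
  configuration: { baseURL: 'https://openrouter.ai/api/v1' },
}).bindTools(tools);

// Re-invoke until the response actually contains tool calls.
// This only papers over the flakiness; it does not fix the root cause.
async function invokeWithToolRetry(messages: BaseMessageLike[], maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const response = await model.invoke(messages);
    if (response.tool_calls?.length) return response;
    console.warn(`No tool calls on attempt ${attempt}, retrying...`);
  }
  throw new Error(`No tool calls after ${maxAttempts} attempts`);
}
```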
What function-calling mode are you using? I have found that ANY works best, in combination with a tellUser function made available to the model if the agent is user-facing.
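For the Vercel AI SDK users in this thread: ANY is Gemini’s functionCallingConfig mode that forces the model to call some function on every step, and if I understand the mapping correctly it corresponds to `toolChoice: 'required'`. Something like this (a sketch, assuming the v4-style API; tellUser is just an example tool):

```ts
import { streamText, tool } from 'ai';
import { google } from '@ai-sdk/google';
import { z } from 'zod';

const result = streamText({
  model: google('gemini-2.5-pro'),
  // 'required' should translate to Gemini's ANY mode: the model must
  // call a tool on every step and can never answer in plain text.
  toolChoice: 'required',
  tools: {
    // Because plain-text answers are disabled under ANY, give the model
    // an explicit channel for user-facing messages.
    tellUser: tool({
      description: 'Send a message to the user',
      parameters: z.object({ message: z.string() }),
      execute: async ({ message }) => {
        console.log(message);
        return 'delivered';
      },
    }),
    // ...the rest of your tools
  },
  prompt: '...',
});
```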
Been having the same issue since 2.0 Flash. How has this not been addressed by Google, Vercel, or LangChain? What is going on here?
Relieved to see I am not the only one having issues. Has anyone found a solution yet? This also seems to happen with the latest Gemini 2.5 Pro and the gemini-2.5-flash-preview-05-20 model.
I am still not sure if this is a model issue or a Vercel AI SDK issue. All other models seem to call tools correctly; only the Gemini models fail.
It’s a bummer, as this makes these models unusable for me.
Thanks for bringing up this issue. This definitely seems to be a bug.
To help us debug this issue, can you please DM me your email ID, the tool that you are using, and the error messages and logs? I will escalate and have the Engineering team look into these issues.
We have the same issues with our trained 2.5 Flash and 2.5 Flash Lite models. In our case it’s chain calling that is broken: the model calls for data and should then proceed to call display_chart, but it only does so 50% of the time, and just returns an empty string when it fails. That makes charting a no-go on these models, which is a real shame because we like the analytic output much better than OpenAI’s. By the way, I don’t know how to DM you.
@Krish_Varnakavi1 Hello, I have the same issues here. It happens across the whole Gemini 2.5 series. I enabled thinking, and the model says it will use the tool, but no tool is ever called; it just returns an empty string with finish reason: stop. By the way, I cannot find the DM entry point either.
There is no message button when I click on your name. I use the Vercel AI SDK with OpenRouter as the model provider. I wonder how to debug whether this is a tool schema mismatch error; perhaps I should ask Vercel instead. However, another problem is emerging: even when the tool call succeeds and returns a value, the model just ends the turn without emitting any final response after the tool call. Do you have any suggestions on how to resolve this?
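Edit: digging through the Vercel AI SDK docs, I may have found the cause of the second problem: by default, generation stops after a single step, so the run ends right after the first tool result unless you allow more steps. Trying maxSteps now (a sketch, assuming the v4 generateText API):

```ts
import { generateText } from 'ai';
import { google } from '@ai-sdk/google';

const result = await generateText({
  model: google('gemini-2.5-pro'),
  // The default is a single step, so the run ends right after the first
  // tool result; extra steps let the model write a final text answer.
  maxSteps: 5,
  tools: {
    /* same tools as before */
  },
  prompt: '...',
});
console.log(result.text); // final response after the tool round-trip
```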
For the empty response case, I suspect it might be because I’m asking it to generate a finalized summary at the end of the turn instead of sending status updates between each tool call, causing the model to lose attention by the end of the turn. Could I solve this by repeating that instruction with every tool result I return?
Wanted to chime in here as well: no matter whether I use Gemini 2.5 Flash or Gemini 2.5 Pro, I tend to run into tool-calling issues.
It is especially apparent with tools that have bigger input schemas. Given that I can switch my implementation to OpenAI’s or Anthropic’s models and they work every time, this strongly suggests an issue on Gemini’s side.
Hi. I’m a Googler working on a side project called CodeRhapsody (like Claude Code, but better, IMO).
It works great with Claude Sonnet 4.5 and 4.0, but I have the exact problem mentioned above: tool-call chains just end, typically after a single tool call. If I beg the AI, it will call twice in a row. The tool-call schema is fairly complex; there are not many tools, but the memory-compression function takes a lot of parameters, and the tool descriptions are long.
I’m happy to share the code, as it is all Google’s in any case. Let me know whom to add to the private GitHub project, and I’ll add them.