Error 400 when LLM response contains both Text and FunctionCall

I’m using the gemini-2.0-flash-001 model with the Go API.

I have a model set up with a handful of function declarations. Whenever a call to genai.ChatSession.SendMessage(ctx, inputParts...) returns a top candidate containing both an (empty) Text part and a FunctionCall part, the next SendMessage call, in which inputParts is a FunctionResponse, invariably fails with error 400 and an empty error message: googleapi: Error 400:.
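
For context, the call sequence looks roughly like this (a trimmed-down sketch, not my actual code; the getWeather declaration, the prompt, and the API-key handling are placeholders):

```go
package main

import (
	"context"
	"log"
	"os"

	"github.com/google/generative-ai-go/genai"
	"google.golang.org/api/option"
)

func main() {
	ctx := context.Background()
	client, err := genai.NewClient(ctx, option.WithAPIKey(os.Getenv("GEMINI_API_KEY")))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	model := client.GenerativeModel("gemini-2.0-flash-001")
	model.Tools = []*genai.Tool{{
		FunctionDeclarations: []*genai.FunctionDeclaration{{
			Name:        "getWeather",
			Description: "Returns the current weather for a city.",
			Parameters: &genai.Schema{
				Type: genai.TypeObject,
				Properties: map[string]*genai.Schema{
					"city": {Type: genai.TypeString},
				},
				Required: []string{"city"},
			},
		}},
	}}

	cs := model.StartChat()

	// First turn: in the failing case the top candidate comes back with both
	// an empty Text part and a FunctionCall part.
	resp, err := cs.SendMessage(ctx, genai.Text("What's the weather in Zurich?"))
	if err != nil {
		log.Fatal(err)
	}

	// Pick the FunctionCall out of the top candidate.
	var call *genai.FunctionCall
	for _, part := range resp.Candidates[0].Content.Parts {
		if fc, ok := part.(genai.FunctionCall); ok {
			call = &fc
		}
	}
	if call == nil {
		log.Fatal("no FunctionCall in response")
	}

	// Second turn: reply with the FunctionResponse. This is the call that
	// fails with "googleapi: Error 400:" whenever the previous candidate also
	// contained the empty Text part.
	_, err = cs.SendMessage(ctx, genai.FunctionResponse{
		Name:     call.Name,
		Response: map[string]any{"temperatureC": 7, "conditions": "cloudy"},
	})
	if err != nil {
		log.Fatalf("SendMessage with FunctionResponse failed: %v", err)
	}
	log.Print("second turn succeeded")
}
```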

Changing the prompt slightly avoids the mixed Text and FunctionCall output, and with it the 400 error on the next call.

Expected behavior: tolerate LLM responses with mixed Text and FunctionCall parts, so that the subsequent call carrying a FunctionResponse succeeds.


Hi @John_Wong,

When the model returns a candidate that mixes a Text part (even an empty one) with a FunctionCall, the Go API's genai.ChatSession.SendMessage(ctx, inputParts...) fails to process the follow-up FunctionResponse, and the request is rejected with googleapi: Error 400: and no further explanation.
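
To confirm that a given response is in this state, you can inspect the parts of the top candidate after the first SendMessage call. Here is a small sketch of such a check; it assumes resp is the *genai.GenerateContentResponse you already receive:

```go
import (
	"log"
	"strings"

	"github.com/google/generative-ai-go/genai"
)

// logMixedParts reports whether the top candidate of resp combines an empty
// Text part with a FunctionCall, the pattern that precedes the 400 error.
func logMixedParts(resp *genai.GenerateContentResponse) bool {
	hasEmptyText, hasCall := false, false
	for _, part := range resp.Candidates[0].Content.Parts {
		switch p := part.(type) {
		case genai.Text:
			log.Printf("text part: %q", string(p))
			if strings.TrimSpace(string(p)) == "" {
				hasEmptyText = true
			}
		case genai.FunctionCall:
			log.Printf("function call: %s(%v)", p.Name, p.Args)
			hasCall = true
		default:
			log.Printf("other part: %T", p)
		}
	}
	return hasEmptyText && hasCall
}
```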

As you’ve already discovered, slightly modifying the prompt to avoid mixed Text and FunctionCall outputs can prevent the error. This workaround is currently the most reliable method until the issue is resolved.

Try to structure prompts so the model clearly chooses either a Text response or a FunctionCall, not both.
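
In the Go client, one way to reinforce this is a system instruction. This is a sketch rather than an official fix; the instruction wording is just an example, and model is the *genai.GenerativeModel you already configure with your tools:

```go
// Ask the model to keep text replies and function calls in separate turns.
model.SystemInstruction = &genai.Content{
	Parts: []genai.Part{genai.Text(
		"When a tool is needed, reply with the function call only and no " +
			"accompanying text. Otherwise, reply with text only.",
	)},
}
```

This does not guarantee the model never mixes parts, but it tends to reduce how often the mixed output shows up.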

For best practices and implementation guidance, refer to Google's Function Calling documentation.