Gemini-2.5-pro calls the same function again and again endlessly

Hi. I’m using LLM models for an event-based agent application that depends heavily on function calling. When an event is created, I feed it to the LLM and the LLM calls various functions. Some of those functions generate their responses asynchronously, after a delay. When the LLM calls one of these asynchronous functions, the function immediately returns a string message like “Asynchronous request was sent successfully. The response will be delivered later as an event.” Later, the actual response is created as an event and fed into the LLM again.
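For context, the pattern looks roughly like this (a minimal sketch; start_async_job, on_result_event, and the PENDING store are hypothetical names, not my actual code):

import uuid

PENDING = {}  # request_id -> tool_call_id; hypothetical in-memory bookkeeping

def start_async_job(tool_call_id):
    """Tool handler: enqueue the slow work, then return an immediate ack string."""
    request_id = str(uuid.uuid4())
    PENDING[request_id] = tool_call_id
    # ... hand the real work to a queue / thread / external service here ...
    return ("Asynchronous request was sent successfully. "
            "The response will be delivered later as an event.")

def on_result_event(request_id, result, messages):
    """When the real result finally arrives, feed it back to the LLM as a new event."""
    messages.append({
        "role": "user",  # the result comes in as a fresh event, not a tool reply
        "content": "[EVENT] Result of request {}: {}".format(request_id, result),
    })
    # ... invoke the LLM again with the updated messages ...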

This works quite well with gpt-4.1, gpt-4.1-mini, grok3, grok3-mini, and other models. But with Gemini 2.5 (both Flash and Pro), the LLM calls the asynchronous function again and again endlessly and never produces a final response message. So I added an extra prompt: “You must complete your output after calling an async function; you will receive the result as an event later.” With that, gemini-2.5-flash stops the endless calling and works well like the other models.
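Concretely, the extra instruction just gets appended to the system prompt (a sketch of the OpenAI-style message shape; base_system_prompt is a placeholder):

ASYNC_HINT = ("You must complete your output after calling an async function; "
              "you will receive the result as an event later.")

base_system_prompt = "..."  # the application's existing instructions

messages = [
    {"role": "system", "content": base_system_prompt + "\n" + ASYNC_HINT},
    # ... event and conversation messages follow ...
]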

But gemini-2.5-pro still doesn’t work well: it keeps calling functions endlessly. Is there a suggested prompt for async function calls with gemini-2.5-pro?

Thanks.

I’ve been battling the same issue for the past day as well, but with the latest flash model (2025-05).

The following is an example messages array going through LiteLLM (hence the OpenAI schema), using the server-everything MCP server.

[
    {"role": "developer", "content": "..."},
    {"role": "user", "content": "get tiny image"},
    {
        "role": "assistant",
        "tool_calls": [
            {
                "id": "efc063d5-5e13-43ad-ab45-16b314fbcfa4",
                "function": {
                    "arguments": "{}",
                    "name": "getTinyImage"
                },
                "type": "function",
                "index": 0
            }
        ]
    },
    {
        "role": "tool",
        "name": "getTinyImage",
        "tool_call_id": "efc063d5-5e13-43ad-ab45-16b314fbcfa4",
        "content": "This is a tiny image:\n<ENCODED_IMAGE/>"
    },
    {
        "role": "tool",
        "name": "getTinyImage",
        "tool_call_id": "efc063d5-5e13-43ad-ab45-16b314fbcfa4",
        "content": "The image above is the MCP tiny image."
    },
    {
        "role": "assistant",
        "tool_calls": [
            {
                "id": "5f271080-b74d-45f6-8b9b-11f77c51aabf",
                "function": {
                    "arguments": "{}",
                    "name": "getTinyImage"
                },
                "type": "function",
                "index": 0
            }
        ]
    }
]
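For reference, this is roughly how I replay that array through LiteLLM (a sketch; the model string is a placeholder for whichever 2.5 variant you’re testing):

import litellm

messages = []  # paste the messages array shown above
tools = []     # function declarations exposed by the MCP server

response = litellm.completion(
    model="gemini/gemini-2.5-flash",
    messages=messages,
    tools=tools,
)
print(response.choices[0].message.tool_calls)  # comes back with getTinyImage yet again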

Actually, here’s a clearer example that uses the Gemini schema. Note that the model produces the same tool call once again.

Request:

{
    "system_instruction": {
        "parts": [{"text": "You operate according to the \"MOO_INSTRUCTION\" declaration. They are as follows:\n<MOO_INSTRUCTIONS>\n    <INSTRUCTION>Treat response parts namespaced with \"MOO_\" as developer constructs. E.g. \\<MOO_ENCODED_IMAGE/\\> returned by a tool/function call means that an image will get rendered in that place, that does not mean that the text \\<MOO_ENCODED_IMAGE/\\> should ever be shown to the user</MOO_INSTRUCTION>\n    <INSTRUCTION>If a question is unrelated to functions provided to you, use your intrinsic knowledge to anwer the question (you don't have to issue tool use every time).</MOO_INSTRUCTION>\n</MOO_INSTRUCTIONS>\n"}]
    },
    "contents": [
        {
            "role": "user",
            "parts": [{"text": "get tiny image"}]
        },
        {
            "role": "model",
            "parts": [{"function_call": {"name": "getTinyImage","args": {}}}]
        },
        {
            "parts": [
                {"function_response": {"name": "getTinyImage","response": {"content": "This is a tiny image:\n<MOO_ENCODED_IMAGE/>"}}},
                {"function_response": {"name": "getTinyImage","response": {"content": "The image above is the MCP tiny image."}}}
            ]
        }
    ],
    "tools": [{"function_declarations": [
        {
            "name": "getTinyImage",
            "description": "Returns the MCP_TINY_IMAGE",
            "parameters": {
                "type": "object",
                "properties": {}
            }
        }
    ]}]
}
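(That body is POSTed straight at the generateContent endpoint, roughly like this; the model name and API key are placeholders:)

import requests

url = ("https://generativelanguage.googleapis.com/v1beta/"
       "models/gemini-2.5-flash:generateContent")
request_body = {}  # fill in the JSON body shown above

resp = requests.post(url, params={"key": "..."}, json=request_body)
print(resp.json()["candidates"][0]["content"]["parts"])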

Response:

{
    "candidates": [
        {
            "content": {
                "parts": [
                    {
                        "functionCall": {
                            "name": "getTinyImage_LFOG_u",
                            "args": {}
                        }
                    }
                ],
                "role": "model"
            },
            "finishReason": "STOP",
            "index": 0
        }
    ]
}
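The only stopgap I’ve found so far is a client-side guard that refuses to execute an identical repeat call and nudges the model instead (a sketch against the OpenAI-style message shape; is_repeated_call is a hypothetical helper, not something LiteLLM provides):

def is_repeated_call(messages, call):
    """True if an identical call (same name, same arguments) was already issued."""
    for msg in messages:
        for prior in msg.get("tool_calls") or []:
            if (prior["function"]["name"] == call["function"]["name"]
                    and prior["function"]["arguments"] == call["function"]["arguments"]):
                return True
    return False

# Instead of executing the duplicate, inject a nudge and re-prompt:
# if is_repeated_call(messages, new_call):
#     messages.append({
#         "role": "user",
#         "content": "You already called that function and its result is above; "
#                    "answer now instead of calling it again.",
#     })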