Gemini-2.5-pro-exp-03-25 Function Calling 500 Error with Complex Schema

Hi everyone!

We’ve been testing the new gemini-2.5-pro-exp-03-25 model via the Python google-genai library and encountered an issue specifically related to function calling with complex schemas.

Scenario:

  1. We define a function (execute_operations) with a relatively complex parameter schema (nested objects, arrays of objects with multiple properties, enums).

  2. We use a system instruction prompting the model to generate complex, dummy data for this function call for testing purposes.
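For context, here is a hypothetical function declaration of roughly the complexity that triggers the error. The field and operation names are illustrative only, not our exact production schema:

```python
# Hypothetical declaration for execute_operations: nested objects,
# arrays of objects with multiple properties, and enums.
# All field names here are made up for illustration.
execute_operations_decl = {
    "name": "execute_operations",
    "description": "Execute a batch of operations.",
    "parameters": {
        "type": "object",
        "properties": {
            "operations": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "kind": {
                            "type": "string",
                            "enum": ["create", "update", "delete"],
                        },
                        "target": {
                            "type": "object",
                            "properties": {
                                "id": {"type": "string"},
                                "attributes": {
                                    "type": "array",
                                    "items": {
                                        "type": "object",
                                        "properties": {
                                            "key": {"type": "string"},
                                            "value": {"type": "string"},
                                        },
                                        "required": ["key", "value"],
                                    },
                                },
                            },
                            "required": ["id"],
                        },
                    },
                    "required": ["kind", "target"],
                },
            }
        },
        "required": ["operations"],
    },
}
```

Schemas of this shape (arrays of objects nested two or three levels deep, plus enums) reliably reproduce the error for us, while flat schemas do not.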

Observation:

  • When using gemini-2.5-pro-exp-03-25, the API consistently returns a ServerError: 500 INTERNAL when the model attempts to generate a FunctionCall with complex arguments that match the schema. The specific error message is:
{'error': {'code': 500, 'message': 'An internal error has occurred. Please retry or report in https://developers.generativeai.google/guide/troubleshooting', 'status': 'INTERNAL'}}

Here is a Colab notebook that reproduces the error: Google Colab

  • When using gemini-2.0-flash with the exact same code, schema, and prompt, the model successfully generates the FunctionCall with the expected complex arguments.

  • It seems gemini-2.5-pro-exp-03-25 can successfully call the same function if the required arguments/generated call structure is much simpler.

This suggests a potential issue specifically with gemini-2.5-pro-exp-03-25’s handling of function calling when the required arguments structure (based on the provided schema) becomes sufficiently complex.
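In the meantime, we work around the failure by retrying and then falling back to gemini-2.0-flash. A minimal, library-agnostic sketch of that pattern; the `primary` and `fallback` callables are assumptions standing in for the actual generate_content calls against each model:

```python
import time


def call_with_fallback(primary, fallback, retries=2, delay=1.0):
    """Try `primary` up to `retries` times, then fall back to `fallback`.

    In our case `primary` would wrap a generate_content call against
    gemini-2.5-pro-exp-03-25 and `fallback` the same call against
    gemini-2.0-flash; here both are plain callables for illustration.
    """
    for attempt in range(retries):
        try:
            return primary()
        except Exception:  # in practice, catch google.genai.errors.ServerError
            if attempt < retries - 1:
                time.sleep(delay)  # brief pause before retrying
    return fallback()
```

This obviously loses the stronger model on those requests, so it is a stopgap rather than a fix.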

Has anyone else experienced similar behavior with gemini-2.5-pro-exp-03-25 and complex function call schemas?


I encountered the same problem!

Hi @Bryan_Djafer , Welcome to the forum.

Thank you for sharing the code reproduction; it’s helpful.

I am escalating this to the product engineering team.


Hi Gunand, thank you very much! Let me know if you need any additional information.
Looking forward to updates.


Hi,

I am also facing the same issue with function calling. For example, I was trying to run the multi-agent workflow from LlamaIndex using gemini-2.5-pro, and it either stops prematurely after the research agent has finished its work or gets stuck in a circular loop. With gemini-2.0-flash, the workflow executes flawlessly.
Below is the code I used to integrate the Gemini LLM into the LlamaIndex example code mentioned above. I have tried various configurations for the gemini-2.5-pro LLM, but nothing seems to work.

import google.auth.transport.requests
from google.auth import default
from llama_index.llms.google_genai import GoogleGenAI

# Assumes `project_id` and `region` are defined elsewhere in the module.
def get_llm(model="gemini-2.0-flash"):
    # Get credentials for Vertex AI
    credentials, _ = default(scopes=["https://www.googleapis.com/auth/cloud-platform"])
    auth_req = google.auth.transport.requests.Request()
    credentials.refresh(auth_req)
    if not credentials.valid:
        raise RuntimeError("Failed to refresh Google Cloud credentials.")

    # Base configuration
    llm_config = {
        "api_base": f"https://{region}-aiplatform.googleapis.com/v1/projects/{project_id}/locations/{region}/endpoints/openapi",
        "api_type": "open_ai",
        "model": model,
        "temperature": 0.1,
        "vertexai_config": {
            "project": project_id,
            "location": region,
            "credentials": credentials
        }
    }

    # Extra settings tried for the 2.5 models: disable automatic function
    # calling and force tool use via the tool config. Neither helped.
    if "gemini-2.5" in model:
        llm_config["automatic_function_calling"] = {"disable": True}
        llm_config["tool_config"] = {"function_calling_config": {"mode": "any"}}

    return GoogleGenAI(**llm_config)

I am having the same issue here!! :frowning:

Hi @GUNAND_MAYANGLAMBAM, any update on the matter? :slight_smile: