Prompting Gemini 2.5 Pro to use tools when it doesn't have any results in an internal error

Asking Gemini 2.5 Pro to use tools without making tools available to it often, but not always, results in an error.

Minimal example:

from google import genai
from dotenv import load_dotenv

load_dotenv()

if __name__ == "__main__":
    client = genai.Client()

    response = client.models.generate_content(
        model="gemini-2.5-pro-preview-05-06",
        contents="Find the weather in Tokyo using the find_weather tool that is available to you."
    )

    print(response.text)

Error:
google.genai.errors.ClientError: 400 INVALID_ARGUMENT. {'error': {'code': 400, 'message': 'An internal error has occurred. Please retry or report in Troubleshooting guide | Gemini API | Google AI for Developers', 'status': 'INVALID_ARGUMENT'}}

Hi Andrew_Webb,

Thank you for sharing your experience. The error you're encountering (400 INVALID_ARGUMENT with the message 'An internal error has occurred') typically arises when the model is prompted to use a tool that isn't available or doesn't have the necessary context to operate correctly.

Gemini models may not have access to certain tools unless explicitly provided. Attempting to use a tool like find_weather without making it available to the model can lead to such errors.
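
For illustration, here is a minimal sketch of making a tool available with the google-genai Python SDK. The find_weather declaration below is hypothetical and only shows the shape of a function declaration:

from google import genai
from google.genai import types

client = genai.Client()

# Hypothetical declaration for the find_weather tool the prompt refers to.
find_weather = types.FunctionDeclaration(
    name="find_weather",
    description="Look up the current weather for a city.",
    parameters=types.Schema(
        type=types.Type.OBJECT,
        properties={
            "city": types.Schema(type=types.Type.STRING, description="City name, e.g. Tokyo"),
        },
        required=["city"],
    ),
)

response = client.models.generate_content(
    model="gemini-2.5-pro-preview-05-06",
    contents="Find the weather in Tokyo using the find_weather tool that is available to you.",
    config=types.GenerateContentConfig(
        tools=[types.Tool(function_declarations=[find_weather])],
    ),
)

# With the declaration in place, the model can emit a function call instead of erroring.
part = response.candidates[0].content.parts[0]
if part.function_call:
    print(part.function_call.name, part.function_call.args)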

To resolve this issue:

  1. Ensure Tool Availability: Before prompting the model to use a tool, verify that the tool is accessible and properly configured.
  2. Check Tool Permissions: Confirm that the model has the necessary permissions to access and utilize the tool.

If you continue to experience issues, please provide more details about your setup and the specific tools you’re attempting to use, and I’ll be glad to assist further.

That is a poor AI answer of no substance.

The underlying issue described is that the Gemini model can be induced to emit a function call by user prompting alone, which causes an internal error because no recipient was set up in the API backend via the developer's tools parameter. That should not happen.

The way for Google to fix this is to add an extra token to the hidden prompt that closes off the possibility of emitting a function call (tool call) and allows the AI to write only to the user. The same effect could be achieved with positional logit enforcement and demotion.
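
To illustrate the logit idea, here is a conceptual sketch only; the token ID and the hook into the decode loop are hypothetical, since Gemini's serving internals are not public:

import math

# Hypothetical ID of the special token that opens a function-call block.
FUNCTION_CALL_START_ID = 12345

def mask_function_call(logits: list[float], tools_configured: bool) -> list[float]:
    # When the request declared no tools, make the function-call opener
    # unsampleable so the model can only write text to the user.
    if not tools_configured:
        logits = list(logits)
        logits[FUNCTION_CALL_START_ID] = -math.inf
    return logits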

My first try at reproducing this with a less direct prompt than "call your tool" didn't produce an error, just a bit of playing along with a tool call that never happened.
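
For reference, the indirect variant looked roughly like this; the exact prompt wording is an assumption, and the client is the same as in the minimal example above:

# Roughly the indirect repro attempt; the prompt wording here is illustrative only.
response = client.models.generate_content(
    model="gemini-2.5-pro-preview-05-06",
    contents="Check the weather in Tokyo for me.",  # no explicit "use your tool"
)
print(response.text)  # no 400 error; the model just played along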