OpenAI’s documentation explicitly states that while the tool is called
“Code Interpreter” externally, the model internally knows it as the
“python tool” — and recommends using that term in prompts for the most
reliable invocation.
I’m curious whether Gemini has a similar internal name or concept for
its Code Execution tool.
Looking at the API response structure, the model outputs executable_code
parts and receives code_execution_result parts — which suggests the
model understands the tool through these part types rather than a named
tool call. But I couldn’t find any official documentation that explicitly
describes how the model perceives this tool internally.
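For context, here is a minimal sketch of the part types I mean, modeled on the camelCase field names in the raw REST response (the sample values themselves are invented for illustration):

```python
# Illustrative sketch of the parts in a Code Execution response.
# Field names mirror the REST API's camelCase shape
# (executableCode / codeExecutionResult); sample values are made up.
sample_parts = [
    {"text": "I'll compute this with Python."},
    {"executableCode": {"language": "PYTHON", "code": "print(sum(range(10)))"}},
    {"codeExecutionResult": {"outcome": "OUTCOME_OK", "output": "45\n"}},
    {"text": "The sum is 45."},
]

def describe(parts):
    """Classify each part by the key that identifies its type."""
    kinds = []
    for part in parts:
        if "executableCode" in part:
            kinds.append("executable_code")
        elif "codeExecutionResult" in part:
            kinds.append("code_execution_result")
        else:
            kinds.append("text")
    return kinds

print(describe(sample_parts))
# → ['text', 'executable_code', 'code_execution_result', 'text']
```

So the tool surfaces purely as structured parts in the response stream, with no named function call visible to the client.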
Specifically, I’d like to know:
- Does the Gemini model have an internal name for the Code Execution tool (similar to OpenAI’s “python tool”)?
- Is there a recommended way to reference it in prompts to invoke it more explicitly?
Any insight from the team or community would be appreciated!