Using the REST API (Models | Gemini API | Google AI for Developers) to retrieve the list of models, the response contains these entries (excerpt):
{
  "name": "models/gemini-2.0-flash-thinking-exp-1219",
  "version": "2.0",
  "displayName": "Gemini 2.0 Flash Thinking Experimental",
  "description": "Gemini 2.0 Flash Thinking Experimental",
  "inputTokenLimit": 1048576,
  "outputTokenLimit": 65536,
  "supportedGenerationMethods": [
    "generateContent",
    "countTokens"
  ],
  "temperature": 0.7,
  "topP": 0.95,
  "topK": 64,
  "maxTemperature": 2
},
{
  "name": "models/learnlm-1.5-pro-experimental",
  "version": "001",
  "displayName": "LearnLM 1.5 Pro Experimental",
  "description": "Alias that points to the most recent stable version of Gemini 1.5 Pro, our mid-size multimodal model that supports up to 2 million tokens.",
  "inputTokenLimit": 32767,
  "outputTokenLimit": 8192,
  "supportedGenerationMethods": [
    "generateContent",
    "countTokens"
  ],
  "temperature": 1,
  "topP": 0.95,
  "topK": 64,
  "maxTemperature": 2
},
{
  "version": "001",
  "displayName": "Gemma 3 27B",
  "inputTokenLimit": 131072,
  "outputTokenLimit": 8192,
  "supportedGenerationMethods": [
    "generateContent",
    "countTokens"
  ],
  "temperature": 1,
  "topP": 0.95,
  "topK": 64
},
{
  "name": "models/embedding-001",
  "version": "001",
  "displayName": "Embedding 001",
  "description": "Obtain a distributed representation of a text.",
  "inputTokenLimit": 2048,
  "outputTokenLimit": 1,
  "supportedGenerationMethods": [
    "embedContent"
  ]
},
Notice that the JSON for the Gemma entry is missing the (non-optional) "name" field, which makes the response invalid. The expected fields are documented in the Model type reference: deprecated-generative-ai-python/docs/api/google/generativeai/types/Model.md at main · google-gemini/deprecated-generative-ai-python · GitHub
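One way to confirm the problem is to validate the response client-side. The sketch below checks each model entry for the fields I'd expect to always be present; note that REQUIRED_FIELDS is my own reading of the Model docs linked above, and find_invalid_models is a hypothetical helper, not part of any SDK:

```python
# Minimal client-side check for the ListModels response
# (fetched from GET https://generativelanguage.googleapis.com/v1beta/models).
# REQUIRED_FIELDS is an assumption based on the Model type docs; adjust as needed.
REQUIRED_FIELDS = ("name", "version", "displayName", "supportedGenerationMethods")

def find_invalid_models(models):
    """Return (index, missing_fields) pairs for entries lacking required keys."""
    problems = []
    for i, model in enumerate(models):
        missing = [f for f in REQUIRED_FIELDS if f not in model]
        if missing:
            problems.append((i, missing))
    return problems

# Two entries from the response above; the Gemma one has no "name" key.
sample = [
    {"name": "models/learnlm-1.5-pro-experimental", "version": "001",
     "displayName": "LearnLM 1.5 Pro Experimental",
     "supportedGenerationMethods": ["generateContent", "countTokens"]},
    {"version": "001", "displayName": "Gemma 3 27B",
     "supportedGenerationMethods": ["generateContent", "countTokens"]},
]

print(find_invalid_models(sample))  # flags the Gemma entry (index 1) for "name"
```

Running this against the full response flags only the Gemma entry, which matches what the dump above shows.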