Gemma 3 - missing features despite announcement

Hi @GUNAND_MAYANGLAMBAM and @Vishal

Congrats to the team on providing the new Gemma 3 models and the new endpoint on the Google AI API. The announcement blog - Gemma 3: Google’s new open model based on Gemini 2.0 - reads wonderfully. Until someone puts it to the test…

Create AI with advanced text and visual reasoning capabilities

Easily build applications that analyze images, text, and short videos, opening up new possibilities for interactive and intelligent applications

HTTP 400: “Image input modality is not enabled for models/gemma-3-27b-it”
HTTP 400: “Audio input modality is not enabled for models/gemma-3-27b-it”

I tried different images (PNG, JPG, BMP), video (MP4), and PDF documents - both via inlineData and via the File API using fileData attributes.
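For reference, here is a minimal sketch of the kind of generateContent request body that triggers the error. Field names follow the public Gemini API REST schema (contents/parts/inlineData); the base64 image data below is a placeholder, not a real PNG:

```python
import base64
import json

# Build a generateContent request body with an inline image part.
# Sending this to models/gemma-3-27b-it currently returns HTTP 400
# "Image input modality is not enabled".
placeholder_png = base64.b64encode(b"\x89PNG placeholder bytes").decode("ascii")

payload = {
    "contents": [
        {
            "role": "user",
            "parts": [
                {"text": "Describe this image."},
                {"inlineData": {"mimeType": "image/png", "data": placeholder_png}},
            ],
        }
    ]
}

print(json.dumps(payload, indent=2))
```

The same structure with a fileData part (referencing a File API upload URI) fails with the same error.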

Create AI-driven workflows using function calling

Gemma 3 supports function calling and structured output to help you automate tasks and build agentic experiences.

HTTP 400: “Function calling is not enabled for models/gemma-3-27b-it”
HTTP 400: “Json mode is not enabled for models/gemma-3-27b-it”
HTTP 400: “Enum mode is not enabled for models/gemma-3-27b-it”
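For completeness, a sketch of the function-calling request that fails. The tools/functionDeclarations structure follows the Gemini API REST schema; the get_weather function is purely illustrative:

```python
import json

# Minimal function-calling request body per the Gemini API REST schema.
# Against models/gemma-3-27b-it this currently fails with HTTP 400
# "Function calling is not enabled".
payload = {
    "contents": [
        {"role": "user", "parts": [{"text": "What's the weather in Berlin?"}]}
    ],
    "tools": [
        {
            "functionDeclarations": [
                {
                    # Hypothetical function, for illustration only.
                    "name": "get_weather",
                    "description": "Look up the current weather for a city.",
                    "parameters": {
                        "type": "OBJECT",
                        "properties": {"city": {"type": "STRING"}},
                        "required": ["city"],
                    },
                }
            ]
        }
    ],
}

print(json.dumps(payload, indent=2))
```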

System Instruction

HTTP 400: “Developer instruction is not enabled for models/gemma-3-27b-it”

Code execution?

It’s not explicitly mentioned in the blog. What’s the situation here? Right now…

HTTP 400: “Code execution is not enabled for models/gemma-3-27b-it”

Candidate count > 1?

It’s not explicitly mentioned in the blog. What’s the situation here? Right now…

HTTP: “Multiple candidates is not enabled for models/gemma-3-27b-it”

Not sure what the issue is…

However, given the announcement blog, I would expect the mentioned features to be available and operational from day 0 - not hoping for the best while they get added at a later stage.

I’m not sure whether those features have simply been disabled for the Gemma 3 model in the Gemini API while the model itself is capable of handling all of this, e.g. when used locally or deployed on Vertex AI or Cloud Run with a GPU…

Would it be possible to get an update on those?

Thanks.

7 Likes

Same here. “Image input modality is not enabled for models/gemma-3-27b-it”

We are experiencing the same:

Google AI Studio API returned error: 400 Bad Request {
  "error": {
    "code": 400,
    "message": "Image input modality is not enabled for models/gemma-3-27b-it",
    "status": "INVALID_ARGUMENT"
  }
}

Hi

I am trying to get more info on this issue. Thanks.

2 Likes

Thanks. I too am trying to use function calling, and I’m running into the same problem described in the main post.

I cannot get Gemma 3 to work as the chat model for an n8n agent. I get either “Developer instruction is not enabled for models/gemma-3-27b-it” or “Bad request - please check your parameters: Google Gemini requires at least one dynamic parameter when using tools”, even though dynamic parameters are set. Are there any updates on this?

+1 - same deal here.

litellm.exceptions.BadRequestError: litellm.BadRequestError: VertexAIException BadRequestError - {
  "error": {
    "code": 400,
    "message": "Image input modality is not enabled for models/gemma-3-27b-it",
    "status": "INVALID_ARGUMENT"
  }
}

Hi @Jeff_Blackwood

You can use the chat history parameter to inject the system instruction as a pair of request (user) and response (model) messages before sending the user’s prompt.
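A minimal sketch of that workaround, assuming the Gemini API contents format (the helper name and the canned “Understood.” model reply are illustrative, not part of any SDK):

```python
# Workaround: since "Developer instruction is not enabled" for
# models/gemma-3-27b-it, prepend the system instruction as a
# user/model message pair in the chat history instead of using
# the systemInstruction field.
def inject_system_instruction(system_text, user_prompt):
    return [
        {"role": "user", "parts": [{"text": system_text}]},
        {"role": "model", "parts": [{"text": "Understood."}]},
        {"role": "user", "parts": [{"text": user_prompt}]},
    ]

contents = inject_system_instruction(
    "You are a terse assistant. Answer in one sentence.",
    "Summarize the Gemma 3 announcement.",
)
print(len(contents))  # 3
```

The resulting contents list can then be sent as the request body, with no systemInstruction field at all.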

Cheers

Hi,

Thank you for your feedback.
I’ve been updated by the team that image input modality is actively being worked on and will be available soon.

Thanks

2 Likes

Hello @GUNAND_MAYANGLAMBAM

Thanks for the feedback.
Any updates on all the other missing but announced features?

Cheers

2 Likes

As of now, I don’t have the info for that, but I will let you know if I get it.

1 Like

Hi everyone!

I hope you’re doing well.

Thanks everyone for the feedback, we’ll iterate on our documentation for this.

1 Like