Getting 404 on client.models.update()

I am encountering a 404 error when calling client.models.update() in the new Python google-genai client. The code was working fine a day earlier.

Could you please assist in resolving this issue? Further details or logs can be provided if necessary.

Thank you for your time and assistance.


Hey @sps, this seems like an intermittent issue. I just checked on my end, and the update() function is working. I used SDK version 1.5.0.

Let me know if you’re still facing the issue.


Hi @GUNAND_MAYANGLAMBAM

Thanks for testing. However, I’m still getting a 404 error on my end with the google-genai client (currently on version 1.5.0):

Exception has occurred: ClientError
404 Not Found. {'message': '', 'status': 'Not Found'}
  File "/directory/test/app.py", line 75, in <module>
    model = client.models.update(model="gemini-1.5-flash", config=generation_config)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
google.genai.errors.ClientError: 404 Not Found. {'message': '', 'status': 'Not Found'}

It works only when I pass a fine-tuned model, i.e. one tuned from gemini-1.5-flash-001-tuning.


Hmm, interesting. It was working for me with gemini-1.5-flash earlier, though.

Is there any documentation on this method or its endpoint? Currently the API reference only shows models.get and models.list.

Let me get back to you on this.


Hey @sps, could you let us know your use case for calling client.models.update() on a base model?

Hi @GUNAND_MAYANGLAMBAM

I was using the client.models.update() method to create a model that’s configured for structured outputs in a predefined format:

from google import genai
from google.genai import types

client = genai.Client()  # assumes GOOGLE_API_KEY is set in the environment

system_message = "Some system message"
generation_config = types.GenerateContentConfig(
    temperature=0,
    top_p=0.95,
    top_k=40,
    max_output_tokens=8192,
    response_schema=some_pydantic_schema,  # a Pydantic model defining the output format
    response_mime_type="application/json",
    system_instruction=system_message,
)

model = client.models.update(model="gemini-1.5-flash", config=generation_config)
...

response = await client.aio.models.generate_content(model=model, contents=prompt_content)

I found that the above code delivered responses much faster and more reliably, similar to the performance on aistudio.google.com. This was an improvement over what I was getting with:

response = await client.aio.models.generate_content(
    model="gemini-1.5-flash",
    config=generation_config,
    contents=prompt_content,
)

Thanks @sps, checking with the engineering team on this.


Got confirmation that the client.models.update() function is for updating the display name or description of a tuned model. You can’t use it on a base model. It looks like there was a bug, which the engineering team has now fixed.
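For anyone landing here later, a minimal sketch of that intended usage, assuming the `types.UpdateModelConfig` class from the google-genai SDK (the tuned-model name and metadata values below are placeholders, not real resources):

```python
from google.genai import types

# Metadata update for a *tuned* model (not a base model).
# display_name and description are the only fields update() is meant to change.
update_config = types.UpdateModelConfig(
    display_name="My tuned flash model",       # placeholder
    description="Tuned for structured output",  # placeholder
)

# With a configured client (requires an API key), the call would look like:
# client.models.update(model="tunedModels/my-tuned-model", config=update_config)
```

The base-model call in the earlier snippet fails because base models like gemini-1.5-flash have no mutable metadata resource behind this endpoint.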

Thanks!!


Thanks @GUNAND_MAYANGLAMBAM

Is there a timeframe for when we can expect the API reference to reflect this?

Asking because it still shows only models.list and models.get.

Hi @sps

Did you report this documentation issue by clicking the “Send Feedback” button at the top right?
It might be helpful, as there are probably different teams working behind the scenes.

Cheers


Hi @jkirstaetter

Thanks. I’ve reported it via “Send Feedback”.
