sps
March 7, 2025, 7:58am
I am encountering a 404 error when using the client.models.update()
function in the new Python GenAI client. The code was working fine until a day ago.
Could you please assist in resolving this issue? I can provide further details or logs if necessary.
Thank you for your time and assistance.
Hey @sps, this seems like an intermittent issue. I just checked on my end, and the update() function is working; I used SDK version 1.5.0.
Let me know if you’re still facing the issue.
sps
March 10, 2025, 5:17pm
Hi @GUNAND_MAYANGLAMBAM,
Thanks for testing. However, I’m still getting a 404 error on my end with the google-genai client (currently on version 1.5.0):
Exception has occurred: ClientError
404 Not Found. {'message': '', 'status': 'Not Found'}
File "/directory/test/app.py", line 75, in <module>
model = client.models.update(model="gemini-1.5-flash", config=generation_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
google.genai.errors.ClientError: 404 Not Found. {'message': '', 'status': 'Not Found'}
It works only when you pass a fine-tuned model, i.e. one created from gemini-1.5-flash-001-tuning.
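For reference, this is consistent with update() targeting tuned models rather than base models. A minimal sketch of the call pattern that does succeed, assuming the google-genai SDK and a valid API key; the tuned-model name and display name below are hypothetical placeholders:

```python
# Sketch only: assumes google-genai >= 1.5.0 and an existing tuned model.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

# update() succeeds for tuned-model names (e.g. "tunedModels/..."),
# while a base-model name like "gemini-1.5-flash" raises the 404 above.
model = client.models.update(
    model="tunedModels/my-model-id",  # hypothetical tuned-model ID
    config=types.UpdateModelConfig(display_name="My tuned model"),
)
print(model.name)
```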
sps
March 11, 2025, 7:16am
Hmm, interesting. It was working for me with gemini-1.5-flash earlier, though.
Is there any documentation on this method or its endpoint? Currently the API reference only shows models.get and models.list.
Let me get back to you on this.
Hey @sps, could you let us know the use case for the client.models.update() function with the base model?
sps
March 17, 2025, 12:21pm
Hi @GUNAND_MAYANGLAMBAM,
I was using the client.models.update() method to create a model configured for structured outputs in a predefined format:
system_message = "Some system message"
generation_config = types.GenerateContentConfig(
    temperature=0,
    top_p=0.95,
    top_k=40,
    max_output_tokens=8192,
    response_schema=some_pydantic_schema,
    response_mime_type="application/json",
    system_instruction=system_message,
)
model = client.models.update(model="gemini-1.5-flash", config=generation_config)
...
response = await client.aio.models.generate_content(model=model, contents=prompt_content)
I found that the above code delivered responses much faster and more reliably, similar to the performance on aistudio.google.com. This was an upgrade over what I was getting with:
response = await client.aio.models.generate_content(
    model="gemini-1.5-flash",
    config=generation_config,
    contents=prompt_content,
)
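If the goal is to fix a configuration once and reuse it without calling update() on the base model, one alternative is to attach the config to a chat session, which applies it to every turn. A hedged sketch, assuming the google-genai SDK and a valid API key; the prompt text and key are placeholders:

```python
# Sketch only: assumes google-genai is installed and a valid API key.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

config = types.GenerateContentConfig(
    temperature=0,
    response_mime_type="application/json",  # structured JSON output
)

# The config is attached once; each send_message() call reuses it.
chat = client.chats.create(model="gemini-1.5-flash", config=config)
response = chat.send_message("Return a JSON summary of the text: ...")
print(response.text)
```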
Thanks @sps, checking with the engineering team on this.