Hi, thank you everyone for bringing up the issue. I have escalated it to the internal team.
Hi @Paolo
Please let us know how you supplied the schema: as text in the prompt, or through the model configuration.
Below are some of the observations:
- gemini-2.0-flash works fine with pydantic BaseModel
- Issue persists with gemini-2.5-pro-preview-03-25 for both prompt and pydantic BaseModel
Same issue here. In combination with structured output, I have started seeing responses that begin with ny\n```json\n
If anyone finds it useful, here's a quick temporary workaround I've been using that seems to parse it safely for me.
```ts
// Extract the JSON substring from a potentially dirty string
// (e.g. a response prefixed with "ny" or wrapped in a markdown fence).
export const cleanJson = (dirtyJson: string): string => {
  // Find the first opening brace or bracket
  const firstOpen = dirtyJson.search(/[{[]/);
  if (firstOpen === -1) {
    console.error("cleanJson: No opening brace/bracket found in string:", dirtyJson);
    return dirtyJson;
  }
  // Find the last closing brace or bracket
  const lastClose = dirtyJson.lastIndexOf("}");
  const lastBracket = dirtyJson.lastIndexOf("]");
  const lastIndex = Math.max(lastClose, lastBracket);
  if (lastIndex === -1) {
    console.error("cleanJson: No closing brace/bracket found in string:", dirtyJson);
    return dirtyJson; // Return original if no closing found
  }
  // Extract the substring between the first open and last close
  const jsonSubstring = dirtyJson.substring(firstOpen, lastIndex + 1);
  // Verify it parses; return the substring either way so the caller can decide
  try {
    JSON.parse(jsonSubstring);
    return jsonSubstring;
  } catch (e) {
    console.error("cleanJson: Extracted substring is not valid JSON:", jsonSubstring, "Original:", dirtyJson);
    return jsonSubstring;
  }
};
```
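To sanity-check the approach, here's a self-contained sketch; the sample string is made up, but it mimics the `ny` + ```json wrapper reported in this thread, and the slicing logic is the same first-open/last-close idea inlined:

```typescript
// Hypothetical dirty response resembling the ones reported here:
// a stray "ny" prefix plus a markdown fence around the actual JSON.
const dirty = 'ny\n```json\n{"status": "ok", "items": [1, 2, 3]}\n```';

// First opening brace/bracket and last closing brace/bracket
const start = dirty.search(/[{[]/);
const end = Math.max(dirty.lastIndexOf("}"), dirty.lastIndexOf("]"));

// Slice out just the JSON payload and parse it
const extracted = dirty.substring(start, end + 1);
const parsed = JSON.parse(extracted);
console.log(parsed.status); // "ok"
```

Note this assumes the garbage never itself contains braces before the payload; for the `ny`/fence noise seen so far, that holds.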
Hey, Gemini team!
Now we are getting a "ny" string chunk in the JSON response as well.
Any progress on this issue?
I have the same issue in gemini-2.5-pro-preview-03-25
The JSON Schema is provided in the prompt as well as in the config.
The same here, consistent issue in gemini-2.5-pro-preview-03-25.
A random "ny" string appears before the thinking and the final JSON.
On my side I also have a JSON Schema described in the prompt and provided as a config.
I have tried various prompts, but nothing seems to resolve the issue.
In our particular instance, it has resumed normal functionality today.
Updated: It’s still not functioning correctly across all calls.
Some initial tests show it works better, but I'm still getting random responses that now include the word "shame". Before, I was getting "ny" all the time.
No word from Google? Seems like this is a big enough issue where they’d address it and mention there’s a fix on the way, etc.
Indeed, Michael_Francis, it does falter on occasion. It exhibits considerable instability, fluctuating with each API call.
I suspect there’s a bug on a subset of distributed servers. I don’t see Cursor having problems with 2.5. They must have good servers.
thanks for getting back to me.
I’m forced to use text in the prompt to define my desired JSON structure because combining response_schema with tools for function calling isn’t supported and triggers API errors.
My structure is an outer JSON that includes a nested YAML configuration generated by the function call. And it worked GREAT just until a couple of weeks ago.
Unless I'm overlooking an alternative, prompting seems to be the only way to achieve this currently. I'd prefer to use model configuration with a BaseModel for the top-level JSON as you suggested, but that doesn't seem to be supported for a case like mine.
I’m also fine going back to original behavior where no ```json block was added when prompting. There is no reason to force it on us. I added strong prompt instructions to prevent that, but they are completely ignored. It seems “hardcoded” and so it might be an easy fix.
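Until the upstream fix lands, here's a minimal sketch of unwrapping the forced fence before parsing; the function name and regex are my own, not from any SDK, and it assumes the wrapper, when present, is a well-formed ```json/```xml/```txt fence:

```typescript
// Strip a leading ```json / ```xml / ```txt fence (plus any stray prefix
// before it, like "ny") and the trailing ``` fence, if present.
function stripFence(raw: string): string {
  const match = raw.match(/```(?:json|xml|txt)?\s*\n([\s\S]*?)\n?```/);
  return match ? match[1].trim() : raw.trim();
}

const wrapped = '```json\n{"ok": true}\n```';
console.log(stripFence(wrapped)); // {"ok": true}
```

Unlike the first-brace/last-brace approach, this also works for payloads that aren't JSON at all (e.g. XML or plain text inside the fence).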
I’ll send you a DM with more details on my implementation and use case.
Thank you so much for the help - This model is fantastic and it’s a pity we can’t use it for this basic issue.
It started working normally again this afternoon, after having the same problem as above for 2+ days. I hope that wasn’t random and that they actually fixed something.
Has it started working for you or anybody else?
Mine was never random. It worked 100% of the time since the release date, then 0% of the time with the “ny…” error, and then 100% of the time again today.
Hi Everyone,
We would like to inform you that supplying the schema through model configuration works fine for both the gemini-2.0-flash and gemini-2.5-pro-preview-03-25 models.
However, the issue persists when the schema is passed as text in the prompt.
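For anyone hitting this, the config-based path looks roughly like the following in TypeScript. This is a sketch, not verified against every SDK version; the field names (`responseMimeType`, `responseSchema`) and the OpenAPI-style schema shape assume the `@google/genai` request config, and the example schema itself is made up:

```typescript
// Declare the desired structure in the request config instead of as
// prose in the prompt -- this is the path reported as working above.
const generationConfig = {
  responseMimeType: "application/json",
  responseSchema: {
    type: "OBJECT",
    properties: {
      title: { type: "STRING" },
      tags: { type: "ARRAY", items: { type: "STRING" } },
    },
    required: ["title"],
  },
};

// The config would then be passed alongside the model call, e.g.
// ai.models.generateContent({ model, contents, config: generationConfig })
// (call shown as a comment since it needs an API key).
console.log(generationConfig.responseMimeType);
```

Note this doesn't help the function-calling case described above, since combining `response_schema` with tools reportedly triggers API errors.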
When is this expected to be fixed?
The issue seems clear from looking at the (leaked) system prompt for Gemini: it explicitly says to add ```json, so the same is probably happening for the API.
We all would like to keep using Gemini models, but we can’t have an app broken for almost a month now.
Any plan to fix it or should we switch back to OpenAI and Anthropic?
did it get resolved for you?
I still get every single response starting with ```json
This is so bad
This was escalated 14 days ago and nothing has changed. We still get every message starting with ```json even when we explicitly ask it NOT to. There is markdown in the answer and the model response gets truncated. Nobody can seriously use this in production; it breaks on every other message. This should be a simple bug to fix: your internal prompts force any code block to start with ```json, ```xml, or ```txt. Just remove that and we are good to go. It used to work just fine!
2.5 Flash preview doesn’t seem to have this issue, which is great.
Actually, I'm using 2.5 Flash and having a very similar issue: my JSON responses are suddenly full of "Modified by moderator" text, but only through the API; through AI Studio it's fine.