Structured outputs just suddenly started failing

In the last hour, the Gemini API has suddenly started failing every single time, after succeeding for weeks with the same prompt.

I am using the OpenAI-compatible API, per the docs, to get structured output.

model: gemini-2.5-pro-exp-03-25
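
The call is essentially this (a trimmed sketch with a placeholder schema, assuming the OpenAI-compatibility endpoint from the docs, not the exact production code):

```ts
import OpenAI from "openai";

// Gemini through the OpenAI-compatibility endpoint, requesting structured
// output via response_format. Placeholder schema; the real one is bigger.
const client = new OpenAI({
  apiKey: process.env.GEMINI_API_KEY,
  baseURL: "https://generativelanguage.googleapis.com/v1beta/openai/",
});

const completion = await client.chat.completions.create({
  model: "gemini-2.5-pro-exp-03-25",
  messages: [{ role: "user", content: "Extract the data as JSON." }],
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "result",
      schema: {
        type: "object",
        properties: { title: { type: "string" } },
        required: ["title"],
      },
    },
  },
});

// This is the line that now throws: the content is no longer raw JSON.
const parsed = JSON.parse(completion.choices[0].message.content ?? "");
```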

I get the error back: Error generating structured output, primary retry #2: SyntaxError: Unexpected token 'y', "ny ```json "... is not valid JSON

So it is suddenly never returning valid JSON when using the structured output API as described in the documentation.
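
For anyone else hitting this, unwrapping the payload before JSON.parse at least avoids the hard crash. A rough sketch (extractJson is just a helper name, not part of any SDK):

```ts
// Pull the JSON payload out of a response that may be wrapped in prose
// and/or markdown fences. Throws if no JSON-looking span is found.
function extractJson(raw: string): unknown {
  const fenced = raw.match(/```(?:json)?\s*([\s\S]*?)\s*```/);
  const candidate = (fenced ? fenced[1] : raw).trim();
  const start = candidate.search(/[\[{]/);
  const end = Math.max(candidate.lastIndexOf("}"), candidate.lastIndexOf("]"));
  if (start === -1 || end <= start) {
    throw new SyntaxError("No JSON object or array found in model output");
  }
  return JSON.parse(candidate.slice(start, end + 1));
}
```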

This only happens with this model. Flash and others are working fine (with the exact same prompt and schema).

Why did this one suddenly change?


Hey @Dark_Violet, you are right, there does seem to be an issue with the 2.5-pro model. Thanks for escalating the issue; I will check with the team on this.


Longer thread here: Gemini 2.5 Pro inserting random text and format tokens around json responses

This has been going on for at least a day or two. It's a really big bug, and it's causing a lot of people (including us) to move to other models.


Yep, had to move everything off of this model. But it was the best one, and I really need it for a product demo!
Thanks!


This is no longer happening as of this afternoon. Thank you!

I am encountering a similar issue and am now getting [GLOBAL] SyntaxError: Unexpected token ')' every time. I am also using the 2.5 Pro version. I've been working on this for the last three days trying not to lose my work, but it has now deleted all the charts and graphs, which I desperately need. Can anyone make any suggestions?

Structured output is still failing as of today, January 3, 2026

I copied and pasted the structured output demo code from the official Google docs.
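
The request side looks roughly like this (a sketch assuming the @google/genai JS SDK and the recipe schema from the docs example; not my exact code). Note that this schema asks for plain string ingredients, which is exactly the shape the response below has:

```ts
import { GoogleGenAI, Type } from "@google/genai";

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

// Docs-style structured output request with the cookie-recipe schema.
const response = await ai.models.generateContent({
  model: "gemini-2.5-flash",
  contents: "Give me a chocolate chip cookie recipe.",
  config: {
    responseMimeType: "application/json",
    responseSchema: {
      type: Type.OBJECT,
      properties: {
        recipeName: { type: Type.STRING },
        ingredients: { type: Type.ARRAY, items: { type: Type.STRING } },
        instructions: { type: Type.ARRAY, items: { type: Type.STRING } },
      },
      required: ["recipeName", "ingredients", "instructions"],
    },
  },
});

console.log(response.text);
```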

Here is the network response from Gemini:

{
  "candidates": [
    {
      "content": {
        "parts": [
          {
            "text": "{\n\"recipeName\": \"Chocolate Chip Cookies\",\n\"ingredients\": [\n \"2 1/4 cups all-purpose flour\",\n \"1 teaspoon baking soda\",\n \"1 teaspoon salt\",\n \"1 cup unsalted butter (softened)\",\n \"3/4 cup granulated sugar\",\n \"3/4 cup packed brown sugar\",\n \"1 teaspoon vanilla extract\",\n \"2 large eggs\",\n \"2 cups semisweet chocolate chips\"\n],\n\"instructions\": [\n \"Preheat the oven to 375°F (190°C).\",\n \"In a small bowl, whisk together the flour, baking soda, and salt.\",\n \"In a large bowl, cream together the butter, granulated sugar, and brown sugar until light and fluffy.\",\n \"Beat in the vanilla and eggs, one at a time.\",\n \"Gradually beat in the dry ingredients until just combined.\",\n \"Stir in the chocolate chips.\",\n \"Drop by rounded tablespoons onto ungreased baking sheets and bake for 9 to 11 minutes.\"\n]\n}"
          }
        ],
        "role": "model"
      },
      "finishReason": "STOP",
      "index": 0
    }
  ],
  "usageMetadata": {
    "promptTokenCount": 231,
    "candidatesTokenCount": 242,
    "totalTokenCount": 997,
    "promptTokensDetails": [
      {
        "modality": "TEXT",
        "tokenCount": 231
      }
    ],
    "thoughtsTokenCount": 524
  },
  "modelVersion": "gemini-2.5-flash",
  "responseId": "bO5YaZD9LYX1juMP99SLiAo"
}

As you can see, it’s not respecting the schema, causing a Zod runtime error.

Here’s ChatGPT’s summary of the problem:

The model is not respecting your JSON schema: it returns recipeName instead of recipe_name, and ingredients as strings instead of { name, quantity } objects.
Because of that, zod.parse() fails with type and missing-field errors.
You must either fix the prompt to explicitly enforce the schema or change your Zod schema to match the actual output.
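
The first option would look roughly like this: declare the shape my Zod schema expects directly in the request, so the model is constrained to it. This is only a sketch; recipe_name and the { name, quantity } ingredient objects come from my Zod schema, not from the docs:

```ts
import { Type } from "@google/genai";

// Mirror the Zod schema (recipe_name, ingredients as { name, quantity }
// objects) in responseSchema so the model must return that exact shape.
const responseSchema = {
  type: Type.OBJECT,
  properties: {
    recipe_name: { type: Type.STRING },
    ingredients: {
      type: Type.ARRAY,
      items: {
        type: Type.OBJECT,
        properties: {
          name: { type: Type.STRING },
          quantity: { type: Type.STRING },
        },
        required: ["name", "quantity"],
      },
    },
    instructions: { type: Type.ARRAY, items: { type: Type.STRING } },
  },
  required: ["recipe_name", "ingredients", "instructions"],
};
```

This would be passed as config: { responseMimeType: "application/json", responseSchema } on the generateContent call, the same way as in the sketch above.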

I tried changing the model from 2.5 flash to 2.0 flash and it still fails:

ingredients must be an array of objects { name, quantity }, but the model is returning strings.
Only recipe_name and instructions now match your schema.
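
The second option is to relax the Zod schema to match what the model actually sends back. A sketch based on the 2.5 flash response above; whatever shape you pick has to match what that particular model returns:

```ts
import { z } from "zod";

// Zod schema matching the payload shown in the network response above:
// camelCase recipeName and plain string ingredients/instructions.
const RecipeSchema = z.object({
  recipeName: z.string(),
  ingredients: z.array(z.string()),
  instructions: z.array(z.string()),
});

export type Recipe = z.infer<typeof RecipeSchema>;

// Parse the text part of the candidate into the relaxed schema.
export function parseRecipe(modelText: string): Recipe {
  return RecipeSchema.parse(JSON.parse(modelText));
}
```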