2.5 Flash down recently due to thinking tokens

Hello - it seems that adding thinking tokens to the 2.5 Flash model has broken it. We were using it via the Vercel API just fine, and now it has simply stopped working - not sure what to do here!

I was experiencing the same symptoms. It looks like it's resolved now - what do you think?

Hey @Ronak_Shah
Welcome to the community!
The code snippet for thinking tokens works fine on our end.
Please let us know if you are still facing issues.
Thank you!

What happened today with the 2.5 Pro model remapping is that "thinking process" tokens started leaking into the API. I noticed it has just stopped, so :person_shrugging: I think it is fixed.

Eg: [Untitled AI bot PM] - AI Conversation - Sam Saffron's Blog

When looking at the raw response I noticed:

 Request tokens: 886 Response tokens: 5407
data: {"candidates": [{"content": {"parts": [{"text": "Thinking... \n\n","thought": true}],"role": "model"},"index": 0,"safetyRatings": [{"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT","probability": "NEGLIGIBLE"},{"category": "HARM_CATEGORY_HATE_SPEECH","probability": "NEGLIGIBLE"},{"category": "HARM_CATEGORY_HARASSMENT","probability": "NEGLIGIBLE"},{"category": "HARM_CATEGORY_DANGEROUS_CONTENT","probability": "NEGLIGIBLE"}]}],"usageMetadata": {"promptTokenCount": 886,"candidatesTokenCount": 2,"totalTokenCount": 892,"promptTokensDetails": [{"modality": "TEXT","tokenCount": 886}],"thoughtsTokenCount": 4},"modelVersion": "gemini-2.5-pro-exp-03-25"}

data: {"candidates": [{"content": {"parts": [{"text": "**Designing HTML Foundation**\n\nI'm focusing on the HTML structure for the Go board app. First, a main container. Then, a specific container for the 9x9 board grid itself, likely using a `\u003cdiv\u003e` element for initial layout. This framework is essential.\n\n \n\n\n\n","thought": true}],"role": "model"},"index": 0,"safetyRatings": [{"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT","probability": "NEGLIGIBLE"},{"category": "HARM_CATEGORY_HATE_SPEECH","probability": "NEGLIGIBLE"},{"category": "HARM_CATEGORY_HARASSMENT","probability": "NEGLIGIBLE"},{"category": "HARM_CATEGORY_DANGEROUS_CONTENT","probability": "NEGLIGIBLE"}]}],"usageMetadata": {"promptTokenCount": 886,"candidatesTokenCount": 59,"totalTokenCount": 973,"promptTokensDetails": [{"modality": "TEXT","tokenCount": 886}],"thoughtsTokenCount": 28},"modelVersion": "gemini-2.5-pro-exp-03-25"} 

Etc. … notice the new `"thought": true` flag on the text parts.
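For anyone who wants to defend against this on the client side, here is a minimal sketch (plain Python, no SDK assumed, the `visible_text` helper name is mine) that drops any part flagged with `"thought": true` when assembling the visible reply from a streamed `data:` line:

```python
import json

def visible_text(sse_data_line: str) -> str:
    """Extract only non-thought text parts from one `data:` SSE payload."""
    chunk = json.loads(sse_data_line.removeprefix("data:").strip())
    pieces = []
    for candidate in chunk.get("candidates", []):
        for part in candidate.get("content", {}).get("parts", []):
            # Skip parts that thinking-enabled models mark as internal thought.
            if part.get("thought"):
                continue
            pieces.append(part.get("text", ""))
    return "".join(pieces)

# Trimmed-down chunk shaped like the responses above.
line = ('data: {"candidates": [{"content": {"parts": '
        '[{"text": "Thinking...", "thought": true}, {"text": "Hello"}], '
        '"role": "model"}}]}')
print(visible_text(line))  # -> Hello
```

This keeps the response usable whether or not the leak comes back, since chunks without the flag pass through unchanged.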

I guess the follow-on question here is: should API implementers plan for this? Is `thought: true` now something we should parse, or is the API still in flux?

Perhaps some model remappings now default to `includeThoughts: true` when they did not in the past?
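If that is what changed, it should be possible to opt out explicitly. A sketch of a request body that turns thoughts off, assuming the `generationConfig.thinkingConfig.includeThoughts` field from the Gemini API docs (field names worth double-checking against the current reference while the API is in flux):

```python
import json

# Field names (thinkingConfig, includeThoughts) are taken from the Gemini API
# docs as I understand them; verify against the current reference.
request_body = {
    "contents": [
        {"role": "user", "parts": [{"text": "Build a 9x9 Go board in HTML."}]}
    ],
    "generationConfig": {
        "thinkingConfig": {
            "includeThoughts": False,  # ask the API not to stream thought parts
        }
    },
}
print(json.dumps(request_body, indent=2))
```

Sending this per request would pin the behaviour regardless of what a remapping defaults to.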