Starting today, requests to Gemini 3 Preview using the OpenAI compatibility interface fail when reasoning_effort (thinking level) is set to "medium". The same request succeeds with "low" and "high".
This appears to be a model capability/config regression or a validation bug specific to the "medium" level.
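For reference, a minimal reproduction sketch of the failing request. The endpoint path and model name are assumptions based on the OpenAI compatibility docs; only the JSON body is built here, so no API key is needed:

```python
import json

# Assumed endpoint for the OpenAI compatibility interface
ENDPOINT = "https://generativelanguage.googleapis.com/v1beta/openai/chat/completions"

def build_request(effort: str) -> str:
    """Build the chat-completion body for a given thinking level.
    "low" and "high" succeed; "medium" currently returns an error."""
    return json.dumps({
        "model": "gemini-3-pro-preview",  # model name assumed
        "reasoning_effort": effort,       # "low" | "medium" | "high"
        "messages": [{"role": "user", "content": "Hello"}],
    })
```

Send the body with a POST and an `Authorization: Bearer <API key>` header; only `effort="medium"` triggers the failure.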
Yes, you are correct. I have tested this on my end using "reasoning_effort": "medium" and encountered the same error. According to the Gemini documentation, the Pro model supports only the low and high reasoning levels, whereas the Flash model supports low, medium, and high.
I recommend going through this document, as it provides clearer and more detailed information.
This looks like a bug presented as a feature. I mean, the Gemini API has a thinkingBudget field that accepts a numeric value. So how is it possible not to support the medium thinking level with an 8,192-token thinking budget?
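To illustrate the point: the native generateContent API takes a numeric budget directly, so a medium-sized budget is expressible there even though the compatibility layer rejects "medium". A hedged sketch of that request body (field names per the thinking docs; the model name and the 8,192-token mapping are assumptions):

```python
import json

def native_request(prompt: str, budget_tokens: int) -> str:
    """Build a native generateContent body with an explicit thinking
    budget, e.g. 8192 tokens, roughly what "medium" would imply."""
    return json.dumps({
        "contents": [{"parts": [{"text": prompt}]}],
        "generationConfig": {
            # Numeric budget accepted here, no discrete level required
            "thinkingConfig": {"thinkingBudget": budget_tokens},
        },
    })
```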
In any case, thank you for taking the time, @Shivam_Singh2.
The problem is that each model has a different level of adherence to these kinds of parameters. Initially, we wanted to provide full flexibility via the thinking budget, but it didn't work consistently before. So we opted to go with thinking levels. In the case of Pro, we did a bunch of eval work, and medium didn't produce good results, nor was it consistent. I agree that in general this is basically a model bug at the moment. Hoping it gets fixed in the next rev of the model.