However, I have set the max_token parameter to 2000. The issue does not happen every time: roughly 60% of requests fail this way, even though my prompt is exactly the same.
```json
{
  "choices": [
    {
      "finish_reason": "length",
      "index": 0,
      "message": { "role": "assistant" }
    }
  ],
  "created": 1766321261,
  "id": "bexHaZP-GqaKqfkP34fxuAY",
  "model": "gemini-3-pro-preview",
  "object": "chat.completion",
  "usage": {
    "completion_tokens": 0,
    "prompt_tokens": 260,
    "total_tokens": 2257
  }
}
```
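The usage numbers hint at where the budget went: total_tokens (2257) exceeds prompt_tokens (260) plus completion_tokens (0) by roughly 2000, which matches the max_token limit. A minimal sketch of how this failure mode can be detected client-side, using the response pasted above:

```python
import json

# The failing response from above: finish_reason is "length" but the
# assistant message carries no "content" field and completion_tokens is 0.
resp = json.loads("""
{"choices":[{"finish_reason":"length","index":0,
  "message":{"role":"assistant"}}],
 "created":1766321261,"id":"bexHaZP-GqaKqfkP34fxuAY",
 "model":"gemini-3-pro-preview","object":"chat.completion",
 "usage":{"completion_tokens":0,"prompt_tokens":260,"total_tokens":2257}}
""")

choice = resp["choices"][0]
# Truncation with no visible output: the token budget was spent before
# any assistant text was emitted.
truncated = choice["finish_reason"] == "length" and not choice["message"].get("content")
print(truncated)  # True for the response above
```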
Hi @Ganwumeng, thanks for reaching out to us.
This issue is likely related to Gemini 3's internal reasoning process, which can consume the output-token budget before any visible text is produced. Could you try increasing the max_token parameter (for example, to 4000)? You can also try setting explicit system instructions and setting thinking_level to 'low'. Let us know if the issue still persists.
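As a sketch of the suggested changes, assuming the OpenAI-compatible chat.completions request shape implied by the response above. Note that the exact name and placement of the thinking_level field is an assumption here and should be verified against the current Gemini API documentation:

```python
import json

# Sketch of an adjusted request payload (not a definitive implementation).
payload = {
    "model": "gemini-3-pro-preview",
    "messages": [
        # Explicit system instruction, as suggested above.
        {"role": "system", "content": "Answer directly and concisely."},
        {"role": "user", "content": "<your prompt>"},
    ],
    # Raised from 2000 so internal reasoning tokens are less likely to
    # exhaust the budget before visible output is produced.
    "max_tokens": 4000,
    # Assumption: field name and top-level placement for reducing internal
    # reasoning; check the current API reference before relying on this.
    "thinking_level": "low",
}
print(json.dumps(payload, indent=2))
```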