So every once in a while the model decides to respond in a random language with non-English characters, even when I explicitly instruct it to respond only in English. This is 2.5 thinking with structured output. Any ideas?
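Not a fix for the underlying issue, but while waiting on a resolution you could detect when a reply drifts out of English and retry the request. A minimal sketch; the `mostly_latin` heuristic, its threshold, and the `generate` callable are all assumptions, not part of any SDK:

```python
def mostly_latin(text: str, threshold: float = 0.9) -> bool:
    """Heuristic: treat the reply as English if most alphabetic
    characters fall in the ASCII Latin range. (Assumed threshold.)"""
    letters = [c for c in text if c.isalpha()]
    if not letters:
        return True  # no letters at all, nothing to reject
    latin = sum(1 for c in letters if c.isascii())
    return latin / len(letters) >= threshold

def generate_english(generate, prompt: str, max_retries: int = 3) -> str:
    """Retry a hypothetical generate(prompt) call until the reply
    looks English, up to max_retries attempts."""
    for _ in range(max_retries):
        reply = generate(prompt)
        if mostly_latin(reply):
            return reply
    raise RuntimeError("model kept answering in a non-English script")
```

This obviously costs extra calls on a bad draw, so it only makes sense as a stopgap for intermittent language flips.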
Thanks for flagging this. Would you mind sharing the prompt? It will help us reproduce the issue faster and raise it with the engineering team.