gemini-2.0-flash-thinking-exp-1219 tends to switch from English (the default) to German or French in the middle of its output.
FYI, I’m based in France, so yes, a French IP address, but my Windows system, my Zed code editor, and my terminal are all set to English.
Ah, when Gemini has trouble understanding someone because of their accent, it apparently sometimes decides to switch to a language the user might be more fluent in.
Gemini has trouble understanding me through my phone mic combined with my terrible Swiss German accent. Using my video-streaming microphone, wearing headphones, and talking to Gemini on my PC solved the issue.
Haha, I wish I were fluent enough to speak to the AI model by voice.
I’d like a Google team member to acknowledge the issue: I simply type in good English and the model switches languages in the middle of its thoughts.
Oh, you mean the text interface switches… hmm, that’s odd. You can probably compensate for that to some degree with prompt engineering: in the system instructions/init prompt, when you instantiate the model, tell it to always respond in English and never switch languages.
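For example, if you were calling the model through the Python google-generativeai SDK, a minimal sketch could look like the one below. I’m assuming here that this experimental thinking model accepts system instructions at all; if it doesn’t, prepending the same sentence to each user prompt is the fallback. The exact instruction wording is just an example.

```python
import os
import google.generativeai as genai

# Assumes a GOOGLE_API_KEY environment variable is set.
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# Pin the output language via the system instruction when instantiating the model.
model = genai.GenerativeModel(
    model_name="gemini-2.0-flash-thinking-exp-1219",
    system_instruction=(
        "Always respond in English. Do not switch to another language "
        "mid-response, even if the user's locale or wording suggests one."
    ),
)

response = model.generate_content("Summarize the main differences between TCP and UDP.")
print(response.text)
```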
Yep, I use aider, so I can add “chat-language: English” to the config YAML file.
But anyway, Google needs to know about this bug.
Not happening with 4o, o1 and Claude.
LLMs are highly nondeterministic, so I’m not sure calling odd LLM behaviors “bugs” is an adequate description.
I mean, you can’t even align them reliably, as Anthropic recently demonstrated with their paper on how LLMs can fake alignment.
I think it’s rather a bug with the agent scaffolding around the LLM.
The context prompt should make sure the LLM keeps using one specific language, no matter which LLM you use. Something you apparently have now solved by using the “chat-language: English” flag.
I guess we need a better category for collecting observed odd LLM behaviors, so that folks are aware of potential pitfalls when designing new AI solutions with Gemini.