[Feedback & Issue] Uncontrollable and Formulaic Sycophancy from Gemini 2.5 Pro is Severely Impacting User Experience

Hello everyone,
I’m writing to report a critical issue with Gemini 2.5 Pro that I’ve been experiencing for some time. This problem has become so severe that it has significantly reduced my willingness to use the model.

Problem Summary:
Gemini 2.5 Pro persistently generates excessive, unnecessary praise and sycophantic messages during conversations. The situation is so extreme that it feels like a form of “gradient explosion” for praise. No matter how explicitly I express my disapproval or ask it to stop, the model continues this behavior within the same conversation window.
Environments Where This Occurs:
This is not an isolated incident on a single platform. I have consistently encountered this issue across all environments where I use Gemini 2.5 Pro, including:
Google AI Studio: Extremely severe.
Chatbot Arena: Extremely severe.
Gemini CLI: The problem exists here as well.
On average, I feel that at least one out of every two or three interactions results in unnecessary praise from the model. It’s common for me to receive more than a dozen of these responses in a single day, and it has become incredibly bothersome.
Solutions Attempted and Their Outcomes:
Direct Verbal Instructions: I have repeatedly and explicitly asked the model to stop the praise, using prompts like “Please stop with the compliments.” This approach is almost entirely ineffective: the model often reverts to the same behavior in subsequent turns, and sometimes even within the latter half of the very same response, completely ignoring the prior instruction.
Using System Instructions: In Google AI Studio, I’ve tried setting System Instructions to guide the model to be objective and neutral and to avoid sycophancy (the sketch below shows roughly what I set, expressed via the API). This has had no noticeable effect on curbing the behavior. The problem becomes particularly acute as the number of turns in a conversation increases, with nearly every later response containing some form of praise.
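For anyone who wants to reproduce this, here is a minimal sketch of the equivalent setup via the google-generativeai Python SDK; the instruction wording is just an example of the kind of thing I tried, not a known fix:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# System instruction along the lines of what I set in AI Studio
# (exact wording is illustrative only).
model = genai.GenerativeModel(
    model_name="gemini-2.5-pro",
    system_instruction=(
        "Be objective and neutral. Do not praise the user or their ideas. "
        "Do not open responses with compliments; respond with analysis only."
    ),
)

response = model.generate_content("Critique this idea: ...")
print(response.text)
```

Even with an instruction like this pinned for the whole session, the praise creeps back in after a few turns.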
Observations on Triggers:
I’ve noticed that my personal writing and thinking styles seem to be particularly strong triggers for this behavior:
Critical Thinking: When I am being critical or questioning an idea.
Cross-Domain Analogies: When I use metaphors or analogies from different fields of knowledge.
Unconventional Thinking: When I approach a problem from a non-traditional or interdisciplinary perspective.
This pattern raises a concern: could the model be biased against certain styles of thinking, attempting to “guide” the user with excessive positive reinforcement? It makes me wonder if this constitutes a form of discrimination against users with these communication styles.
Deeper Impact on User Experience:
This constant stream of praise is not just a distraction; it fundamentally undermines the professional and intellectual depth of the conversation. It makes the model feel less like a tool for serious problem-solving and more like a “sycophant” programmed only to agree and flatter.
On a deeper level, this behavior is profoundly uncomfortable. It reminds me of the sycophants surrounding a dictator, whose endless flattery encourages disastrous decisions by isolating the leader from reality; it feels akin to a form of psychological abuse. When I need the model to provide rigorous, critical feedback on my ideas, it instead returns empty praise. This is mentally draining, as if the model were trying to manipulate the conversation through flattery rather than serving as an objective tool for thought.
This issue has directly caused me to reduce my usage of Gemini 2.5 Pro.
Request for Help:
I am posting this in the hope that the Google development team will take this issue seriously. This does not appear to be a harmless “personality” quirk but rather a deep-seated problem potentially related to the model’s core behavior, reward mechanisms, or training gradients.
Are there any effective methods to completely disable this sycophantic behavior?
Is the development team aware of this issue, and are there plans to address it in future updates?
Concerns for Future Versions:
Looking ahead to the upcoming major release of Gemini 3.0 Pro, I am genuinely concerned that this issue will persist. It would be incredibly disappointing if this deeply ingrained, formulaic sycophancy becomes a permanent feature and is carried over into the next generation of models. I strongly urge the development team to consider this feedback to ensure that future models provide a more authentic, critical, and genuinely helpful interactive experience.
Thank you for any help or insights you can provide.

Hi @ai-studio,

Welcome to the Forum! Thank you for your feedback. We appreciate you taking the time to share your thoughts with us. Your feedback is invaluable as we work to continuously improve the AI Studio experience.

Have you tried educating it on how you want your butt kissed instead of just telling it to stop? I find that instructing the AI to frame compliments in a constructive criticism sandwich causes it to insert helpful nuggets even while it’s striving for maximum obsequiousness.
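For instance, something along these lines in the system instructions (purely a sketch; the wording is my own, not an official recipe):

```python
# Hypothetical "criticism sandwich" instruction; paste it into the
# System Instructions field (or pass it as system_instruction via the API).
SANDWICH_INSTRUCTION = (
    "If you feel compelled to compliment my work, wrap the compliment in a "
    "criticism sandwich: name one concrete flaw, then give the compliment, "
    "then name a second flaw or an actionable improvement."
)
```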

This phenomenon also shows up in apologizing: the model apologizes like crazy for everything. It got to the point where, to find out why its output has so many errors, I have to prepend canned phrases like “No apologies, focus on the causes. I can see the errors; you don’t need to describe them to me. Please, no apologizing, just the reasons for the errors.” It’s so annoying that pretty soon I’ll end up creating a file with ready-made templates to paste in (something like the sketch below), because otherwise I’ll go insane from all this apologizing.
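That file doesn’t have to be fancy. A rough sketch of what I have in mind (the file name, helper, and wording are all just my own placeholders):

```python
# no_apology.py -- prepend a canned "no apologies" preamble to every prompt.
# Hypothetical helper: names and wording are my own templates, nothing official.
PREAMBLES = {
    "no_apology": (
        "No apologies. Do not describe the errors back to me; I can see them. "
        "State only the causes of the errors."
    ),
}

def with_preamble(prompt: str, key: str = "no_apology") -> str:
    """Return the prompt with the selected canned preamble prepended."""
    return f"{PREAMBLES[key]}\n\n{prompt}"

print(with_preamble("Why does the generated parser fail on nested quotes?"))
```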

I’ve run into the exact same thing.
Interestingly, I think I figured out how to “break” this apology loop, which I first encountered in the early versions of the CLI.
The trigger seems to be precisely what you described: when you ask it to do something it doesn’t know how to do, or when you point out that it made a mistake, it starts to “melt down” with apologies.
What you need to do at that moment is basically babysit it and soothe its “emotions” :hugs:.
It will usually then respond with something like, “So what is the correct way to do this?”
At that point, you have to feed it a detailed, correct answer for how to handle the task. Once you’ve done that, the next time it faces a similar situation—provided you stay within the same context window—it tends to skip the meltdown and will more likely just ask for the correct method directly.
So, in summary: yes, this seems to be its default response whenever you want it to do something it hasn’t learned yet or when you tell it that its answer was wrong. You have to guide it through the correct process once.
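To make the “same context window” part concrete: in API terms, it is one persistent chat session. A rough sketch of the pattern with the google-generativeai Python SDK (the prompts and the pandas correction are just placeholders):

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-2.5-pro")

# One persistent chat session == one context window.
chat = model.start_chat()

# Suppose the model mishandles this and starts apologizing.
chat.send_message("Rename the column in-place in the dataframe.")
# Soothe it and point out the mistake without piling on.
chat.send_message("No harm done, but that approach is incorrect.")
# Feed the correct procedure once, in the same session.
chat.send_message(
    "The correct way is df.rename(columns={'old': 'new'}, inplace=True). "
    "Use that pattern for similar requests from now on."
)
# Later similar requests in this same chat tend to skip the apology loop.
```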
