Hi Google team,
I’m using Janitor.AI, a chatbot platform, through a proxy with the Gemini 2.5 Pro model. The problem isn’t me trying to bypass policy. I fully understand and respect Google’s TOS around NSFW and harmful content. That’s not the issue. The issue is that Gemini 2.5 Pro’s filtering has become so hypersensitive that it blocks completely normal, safe, SFW content. I’ll be roleplaying, writing harmless fictional dialogue, or doing basic storytelling, and the output suddenly gets cut off with the dreaded “No content received from AI Studio due to filtering” message. That makes no sense. It isn’t protecting anyone; it’s just breaking normal usage.
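To show what I mean at the API level, here is a minimal sketch using the google-generativeai Python SDK. The prompt is a made-up example of the kind of harmless content that gets blocked, and the exact behavior through Janitor.AI’s proxy may differ:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

model = genai.GenerativeModel("gemini-2.5-pro")

# An illustrative example of a harmless, SFW roleplay prompt.
prompt = (
    "You are a cheerful innkeeper in a fantasy village. "
    "Greet a traveler who has just arrived and offer them a room."
)

response = model.generate_content(prompt)

if response.prompt_feedback.block_reason:
    # The prompt itself was blocked before generation.
    print("Prompt blocked:", response.prompt_feedback.block_reason)
elif response.candidates and response.candidates[0].finish_reason.name == "STOP":
    print(response.text)
else:
    # No usable candidate: typically the output itself was filtered mid-stream.
    for cand in response.candidates:
        print("finish_reason:", cand.finish_reason.name,
              "safety_ratings:", cand.safety_ratings)
```

When the filter fires on a prompt like this, there is no text at all; the client just sees a safety finish reason, which is what surfaces in Janitor.AI as the “No content received” error.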
And it’s not just me. Some people have even had their accounts and API keys banned despite not breaking the TOS at all. Legitimate users who follow the rules are losing access completely when all they did was use the tool for safe, fictional writing. That’s terrifying, because it shows that even when you play by the rules, the system can still turn on you. If the model keeps wrongly flagging safe content and banning people for things they didn’t do, how is anyone supposed to trust it?
The filters are supposed to keep people safe. Instead, they punish the users who keep things safe. The system blocks harmless, fictional writing while claiming it’s unsafe, which is the opposite of what a safety filter should do. If your model can’t tell the difference between dangerous content and normal fictional roleplay, then the moderation system is not “safety”; it’s broken censorship.
On top of that, the overall experience is already unstable. Constant 429 errors, random cut-offs, and inconsistent quotas make Gemini 2.5 Pro feel unreliable. One day I can go past 50 requests per day (RPD); the next day I’m throttled at seemingly random limits, with no explanation or consistency. It’s as if the system punishes normal usage harder than it punishes actual abuse. That is unacceptable for something branded as a Pro model.
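And yes, I already handle 429s the standard way on the client side. A sketch of the usual exponential-backoff retry (the ResourceExhausted exception comes from google-api-core, which the SDK raises through on 429):

```python
import time

from google.api_core.exceptions import ResourceExhausted


def generate_with_backoff(model, prompt, max_retries=5):
    """Retry generate_content on 429 (ResourceExhausted) with exponential backoff."""
    delay = 2.0
    for attempt in range(max_retries):
        try:
            return model.generate_content(prompt)
        except ResourceExhausted:
            # 429: quota or rate limit hit. Wait and retry with a doubling delay.
            print(f"429 on attempt {attempt + 1}; sleeping {delay:.0f}s")
            time.sleep(delay)
            delay *= 2
    raise RuntimeError("Still rate-limited after retries")
```

But backoff only smooths over transient spikes. When the quota itself changes from day to day, retrying just burns time and requests, which is my point: the client can’t engineer around inconsistent limits.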
Here’s the real issue: I am not asking for fewer rules, I am asking for rules that make sense. Fictional, SFW roleplay is not harmful. Writing harmless, made-up dialogue is not dangerous. Yet Gemini 2.5 Pro keeps cutting me off as if I’ve violated policy. That means your filter is not doing its job: it fails to distinguish between real violations and harmless content. It blocks first and never asks questions. That ruins creativity, disrupts workflow, and destroys user trust.
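For reference, the public Gemini API does expose per-category safety thresholds, so the knobs for “rules that make sense” already exist. A minimal sketch with the same SDK, relaxing every adjustable category; whether Janitor.AI’s proxy passes settings like these through is an assumption on my part:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder

# The four adjustable harm categories in the public SDK, each set to a
# relaxed threshold. (BLOCK_NONE also exists as a value, though its
# availability can depend on the account.)
relaxed_safety = [
    {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_ONLY_HIGH"},
    {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_ONLY_HIGH"},
    {"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT", "threshold": "BLOCK_ONLY_HIGH"},
    {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_ONLY_HIGH"},
]

model = genai.GenerativeModel("gemini-2.5-pro", safety_settings=relaxed_safety)
```

If SFW roleplay is still being cut off even under thresholds like these, the problem is not user configuration; it’s the filter’s judgment of what counts as unsafe.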
Google says these filters are about safety. But when safe content gets flagged as unsafe, that is not safety. That’s an overreaction so extreme that it stops making sense. The result is that legitimate users who follow the rules are being silenced, while the actual abusers will always find a way around filters anyway. That’s why people like me are frustrated, not because we want to break rules, but because we’re playing by them and still being punished.
If Google wants Gemini to be taken seriously, the moderation system needs to be smarter, not harsher. It needs to actually protect people from real risks, not block roleplay, storytelling, and safe, fictional text. Right now, the filter doesn’t protect anyone, it just shuts down normal use cases and makes the model unreliable for the very people who want to use it properly.
All I’m asking for is common sense. Fictional, SFW roleplay is not a threat. Harmless storytelling is not harmful. And yet Gemini 2.5 Pro keeps treating it like it is. That’s broken moderation, plain and simple. Until this is fixed, the experience will continue to feel unreliable, frustrating, and self-defeating.
In short: safe content should not be treated like a policy violation. If it is, then the system isn’t protecting anyone. It’s just blocking normal users. That’s why so many of us feel like Google’s filters are overreacting to the point where they’ve stopped serving their purpose.