Google AI Developers Forum
Subject: Critical Usability Failure: Model Over-refusal and the “Pen vs. Author” Problem
To the Gemini Development Team,
I am writing to express my extreme dissatisfaction and concern regarding the current state of Gemini’s image generation filters. As a researcher and community organizer, I have found that the model’s “safety” guardrails have reached a point of systemic over-refusal, rendering the tool unreliable for legitimate, symbolic expression.
The Core Issue:
I recently requested a stylized image of a “shadow of a king having a crown knocked off.” The request was refused under the “public figure” safety guidelines. That refusal is a massive overstep, for several reasons:
The “Pen vs. Author” Fallacy: A developer is not responsible for what a user creates with their tool, any more than a pen manufacturer is responsible for a controversial book. By trying to “protect” yourselves from hypothetical PR risks, you have effectively seized control over the user’s thoughts and intent.
False Positive Triggers: My prompt named no one. It used a universal, historical metaphor. If the model is programmed to assume that every “king” refers to a specific modern politician, it is no longer an objective tool; it is an ideological editor.
Unreliable Search & Research: Google built its reputation on being the world’s most trusted search engine. If I cannot trust this AI to render a simple shadow without it “sanitizing” my intent, I can no longer trust the platform to provide unfiltered, truthful data in any other context. You are sacrificing the utility of your core product at the altar of “risk assessment.”
Legal & Policy Misalignment: As of March 2026, the National Policy Framework for AI specifically warns against “ideological overreach” and the use of safety as a shield for censorship. Furthermore, the legal protections of Section 230 were intended for neutral platforms. By acting as an editor of my prompts, you abandon that neutral status and create the very liability you claim to fear.
Conclusion:
This level of censorship is an insult to the intelligence of your users and a violation of the spirit of free expression. A tool that is too “safe” to use is a useless tool. I urge the team to recalibrate these filters to respect user agency and return to being a neutral provider of technology.