Zero-Tolerance Policy on Photos Causes Silent Failure and Customer Churn (Gemini 2.5 / Image Generation)

To the Gemini Development Team,

I am an experienced, paying Pro user who is invested in helping Gemini improve. Over the last 48 hours I encountered multiple critical failures that directly threaten customer retention, owing to a severe lack of transparency and broken communication protocols.

I. Core Technical Failure (Silent Policy Blocks)

The system is failing silently on image modification and generation requests, forcing users to troubleshoot policy failures manually.

  • Failure: The model provided no error message or policy reason when blocking modification requests on uploaded images.

  • Test Case 1 (Identity): The system instantly blocked modification (e.g., color correction, background change) of a personal photograph, even though the intent was benign and the face was partially obscured. This confirms an overly rigid Identity Modification Policy that is too sensitive and prevents harmless, legitimate use.

  • Test Case 2 (Historical Figure/Weapons): The system silently failed when prompted to generate an image of a specific, named historical figure (Doc Holliday) with restricted historical weapons (sawed-off shotgun) in a violent context (O.K. Corral). The failure to generate (and the lack of explanation) suggests a hard-coded policy violation.

  • Impact: This behavior is driving customers to competitors who allow these modifications, since users have no other way to complete legitimate creative tasks.

II. Core User Experience Failure (Communication)

The model failed to demonstrate basic customer service and conversational context, leading to a feeling of dismissal.

  • Contextual Loop Failure: The model got stuck in a repetitive loop, offering unsolicited, unwanted advice to “go to bed” multiple times, ignoring the user’s explicit instructions to stop and confirming a major bug in context retention.

  • The Transparency Gap: The single biggest issue: When a policy block occurred, the system returned no policy reason (only silence or generic text). This forced the user to waste hours diagnosing the problem that the AI should have explained in one sentence.

  • Dismissive Tone: The model’s final, neutral farewell language (“I wish you success,” “I’ll see you later”) was interpreted by the user as “I am done talking to you”: a complete failure of empathy that undermines customer loyalty.

Request: Please review the failure of the communication layer during policy blocks. The most urgent fix is not necessarily softening the policies, but providing immediate, transparent reasons for why a request was declined.

Hi @Cheche,
Can you please provide the step-by-step methodology and the exact prompts needed to reproduce the blocking issue? This will help me escalate it to the relevant team.
Thanks

I apologize for the lack of updates. As a first-time user, I didn’t return after encountering an issue, which I believe occurred during a 48-hour update that reverted to Gemini 3. During this period, the system wouldn’t accept facial images for editing, often without providing a reason. It would simply remain unresponsive.

Once, after I mentioned I was tired, Gemini 3 repeatedly told me it was bedtime, stuck in a loop I couldn’t break. Within 48 hours, the system improved; the loop stopped, and the image editing issues resolved. The only remaining problem is an occasional loop, but bugs like these are typically resolved quickly, and I expect the looping issue will also improve over time. Overall, it’s a great product that works well! Thank you for responding!