False Positive in Image Safety Filters - Blocking Original Artist's Content (Sovereignty Project)

Hello,

I am a Google developer and artist working on a project titled “Giesta”. I am facing a persistent and unjustified blocking issue in AI Studio (Nano Banana) with the error: “Unable to show the generated image”.

The problem is that the safety filter is triggering false positives on my own original artwork and photographs of the “River Minho” estuary (Portugal/Spain border). Even after removing my artist signature from the source images to avoid copyright flags, the system continues to block the generation.

I am attempting to merge authorized portraits from my family lineage (the poet Gavinho Pinto) with regional landscapes using tactile textures (impasto, particles). The AI seems to be confusing “complex artistic texture” with “sensitive content”.

As the rightful owner of these images, I find that this “silent filtering” is preventing the development of a project focused on regional heritage and soul-driven art (Soul over Plastic).

Could the team look into how the safety tuner handles “mixed media” and “high-texture” prompts, so that legitimate creative work is not stifled?

Thank you,
António Nunes


I am reporting a new and even more frustrating case of “silent censorship” in AI Studio (Gemini 3 Flash / Nano Banana).

As shown in the attached image, the system blocked a prompt designed to merge hardware (a Router Radar) with botanical and mineral textures (oxidized gold and salt-water crystals). This is a core part of my “Project Giesta,” which explores regional heritage and industrial aesthetics.

The model is failing to distinguish between “complex material textures” and “prohibited content.” By over-filtering terms related to “soul,” “human-like relief,” or even technical “hardware” descriptions, the AI is effectively killing artistic freedom and ethical creativity.

This “censorship by laziness” is a barrier to professional developers who use AI for high-level conceptual art (Soul over Plastic). I urge the safety team to review these triggers. We need a system that analyzes intent and context, not just a list of banned words.

Technical Reference: Patent 136 AN / Project Giesta.