Dear Google Gemini team,
I would like to follow up on previous feedback regarding image generation behavior in Nano Banana Pro, as the issue persists and further testing has made its scope clearer.
As a long-time Google user working in the lingerie and sleepwear industry, I have consistently used Nano Banana Pro for legitimate fashion and lifestyle image generation. Over time, this tool has played an important role in improving workflow efficiency and reducing the need for physical photoshoots.
However, in recent weeks, non-NSFW fashion and lifestyle use cases have continued to be blocked. Even clearly neutral prompts intended for legitimate product display, e-commerce presentation, or lifestyle imagery featuring lingerie, swimwear, or everyday clothing are being rejected. Additional testing with text-only prompts suggests that this behavior may be caused by overly strict IMAGE_SAFETY filtering or false positives.
From a real-world perspective, items such as bikinis, lingerie, sleepwear, and lightweight clothing appear daily across everyday life, vacation environments (beaches, swimming pools, resorts), e-commerce try-on images, and mainstream fashion and advertising media. These scenarios are generally regarded as legal, compliant, and non-NSFW under both consumer and industry standards. Broadly blocking these categories represents a functional regression that affects professional and ordinary users alike.
For many users—regardless of gender—AI visualization tools provide a low-risk, pressure-free way to preview how clothing may look before making purchasing decisions, experiment with styles they may not feel comfortable trying in real life, or generate non-sexual lifestyle content for sharing on social platforms. When these normal use cases are rejected, AI tools no longer enhance productivity or decision-making, but instead create friction and significantly degrade user experience.
It is also important to emphasize that these use cases are not limited to personal scenarios. Across mainstream media, fashion magazines, advertising platforms, and commercial campaigns, images featuring swimwear or bikinis have long been considered standard, legal, and compliant content. When AI-generated imagery is held to a significantly stricter standard than established industry and media norms, it creates a disconnect between model behavior and real-world expectations.
Recent discussions around controversial image generation appear to be focused primarily on clearly high-risk scenarios, such as malicious, degrading, or explicit depictions targeting public figures or celebrities, as well as any inappropriate representations involving minors. These cases clearly require strict identification and regulation. However, addressing such risks through blanket restrictions effectively shifts the cost of edge-case abuse onto a much broader group of normal users and legitimate use cases.
As a further suggestion, it may be worth evaluating whether IMAGE_SAFETY thresholds could be temporarily or partially adjusted to align more closely with the settings in place prior to January 20. Under that earlier approach, safety enforcement appeared to focus more precisely on inappropriate styling involving minors and abusive or degrading image generation targeting public figures or celebrities. Maintaining strict regulation of these high-risk categories while returning to a more targeted enforcement strategy may help reduce false positives affecting ordinary users without weakening essential safety protections.
If the current behavior is the result of a recent safety or policy update, I fully understand and appreciate the team’s efforts to promote responsible AI usage. I sincerely hope the team can consider further refining IMAGE_SAFETY criteria to better distinguish sexualized intent from neutral fashion and lifestyle visualization, align more closely with existing legal, commercial, and mainstream media standards, and restore practical usability for legitimate users.
Thank you for your time and for your continued work on Gemini.