I use AI Studio with Gemini to write stories. As a starting point I can use images, pieces of text or even image prompts for Stable Diffusion, which works fine.
But the thing is that I sometimes need Gemini to clean up the language or the image. For example, I might have an image showing a nude woman, and Gemini should then put her in a bikini. Or I have an SDXL prompt containing an explicit word, and Gemini has to filter it out. Basically, Gemini has to filter my input so that it produces PG-13 content from NC-17 material.
Yes, setting the safety settings to NONE should do the trick, although AI Studio tends to pass them along incorrectly (or so it seems). So the problem is that Gemini rejects my input and generates no result at all because of “safety concerns”…
That is annoying, because I want to use Gemini to remove those safety concerns in the first place! Gemini should be able to handle this and clean up the input. The safety check should apply to the response, even if the request contains unsafe words; blocking unsafe content in the request makes no sense to me when Gemini is the tool being used to clean it up…
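For what it's worth, this is roughly how I would expect the thresholds to be set when calling the API directly with the google-generativeai Python SDK, in case AI Studio really isn't passing them through (just a sketch; the API key, model name and prompt are placeholders):

```python
import google.generativeai as genai
from google.generativeai.types import HarmCategory, HarmBlockThreshold

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# Ask for the most permissive threshold on every adjustable category.
model = genai.GenerativeModel(
    "gemini-1.5-pro",  # placeholder model name
    safety_settings={
        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
    },
)

response = model.generate_content(
    "Rewrite the following passage so it is suitable for a PG-13 audience: ..."
)

# Even with BLOCK_NONE, the prompt itself can still be rejected;
# prompt_feedback reports why instead of the call silently returning nothing.
if response.prompt_feedback.block_reason:
    print("Prompt was blocked:", response.prompt_feedback.block_reason)
else:
    print(response.text)
```

As far as I can tell, the safety settings mostly control filtering of the *response*, while the request can still be rejected by a separate prompt-level check, which would explain the behaviour I'm seeing.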
I occasionally deal with text inputs of 500 words or more that may contain bad language, violence or other “unsafe” content. I wrote them in the first place, but they need to be cleaned up, the way a professional editor would, to make the writing suitable for a younger audience.
The same applies to images and artwork: I use Poser Pro and Eon Vue to render images of pretty women, but I don’t always include clothes. In that case I want Gemini to describe the scene and add clothes, not block the input, because Imagen could then use the resulting description to produce a similar but more decent image.
So why is Gemini blocking any “unsafe” requests?