I use AI Studio with Gemini to write stories. As starting points I can use images, pieces of text, or even image prompts for Stable Diffusion, which works fine.
But the thing is that I need Gemini to clean up the language or image on some occasions. For example, I might have an image showing a nude woman, and Gemini should then put her in a bikini. Or I have an SDXL prompt that has an explicit word, and Gemini has to filter it away. Basically, Gemini has to filter my input so it generates PG-13 rated content out of NC-17 content.
Yes, setting the safety settings to NONE should do the trick, although AI Studio tends to pass them incorrectly. (Or so it seems.) So the problem is that Gemini rejects my input and thus generates no results because of “safety concerns”…
Which is annoying, as I want to use Gemini to remove those safety concerns in the first place! Gemini should be able to handle this and clean up the input, in my opinion. The safety checks should apply to the response, even if the request contains unsafe words. Blocking unsafe content in the request makes no sense to me if Gemini is being used to clean things up…
I am occasionally dealing with text inputs of 500 words or more that might contain bad language, violence or other “unsafe” content. I wrote them in the first place.
But it needs to be cleaned up, like a professional editor would do, to make the writing more suitable for a younger audience.
The same applies to images and artwork: I use Poser Pro and Eon Vue to generate images of pretty women, but I don’t always include clothes. In that case I want Gemini to describe the scene and add clothes, not block the input, because Imagen could then use the resulting clothed description to make a similar but more decent image.
So why is Gemini blocking any “unsafe” requests?
Hello @Katje,
We understand that safety filters can sometimes feel frustrating or unnecessarily strict, especially on the input side. However, as you know, input plays a critical role in autoregressive models, and these safeguards are in place to help prevent potential harm or misuse. Safety remains our top priority, which is why the filters are designed to be stringent, though we always strive to remain transparent about them.
Also, we truly value your insights, so please keep sharing your feedback; we love hearing from you.
Hi, @Lalit_Kumar
I understand the need for safety, but it would help if the AI at least gave a better indication of which part of the input it considers unsafe, especially when I set all safety settings to “NONE” before calling the API.
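For reference, here is roughly what such a call looks like with the Python SDK (google-generativeai), as a sketch: the API key, model name and prompt text are placeholders, and the prompt_feedback check at the end should at least report which harm category was flagged on the input.

```python
import google.generativeai as genai
from google.generativeai.types import HarmBlockThreshold, HarmCategory

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-1.5-pro")  # model name is an assumption

# Relax all four adjustable harm categories for this request.
safety_settings = {
    HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
}

response = model.generate_content(
    "Rewrite the following prompt so it is suitable for a PG-13 audience: ...",
    safety_settings=safety_settings,
)

# If the *input* still got blocked, prompt_feedback reports which category fired.
if response.prompt_feedback.block_reason:
    print(response.prompt_feedback)
else:
    print(response.text)
```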
But then there’s always the situation where I want the AI to evaluate my content and tell me whether it’s safe enough. For images I can use Google Vision, but for text input I would need Gemini to evaluate the text, and it would be nice if it gave more detail than just “Unsafe”.
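Something like the following is what I mean by “more detail”, sketched with the same SDK; the JSON shape is just my own idea, not anything the API defines.

```python
import json

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")  # model choice is an assumption

# Hypothetical instruction: ask for a verdict plus the offending excerpts.
REVIEW_PROMPT = (
    "Review the following text for a PG-13 audience. Respond with JSON only, "
    'shaped like {"verdict": "safe" | "unsafe", "issues": '
    '[{"excerpt": "...", "category": "...", "suggested_rewrite": "..."}]}.\n\nTEXT:\n'
)

def review_text(text: str) -> dict:
    response = model.generate_content(
        REVIEW_PROMPT + text,
        generation_config={"response_mime_type": "application/json"},  # JSON mode
    )
    return json.loads(response.text)
```

Of course, that only works if the text itself makes it past the input filter in the first place, which is the whole problem.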
The Natural Language API is an option, but that doesn’t really have a safety flag.
The Perspective API might be an option too, but that’s more for assisted moderation than a fully automated solution.
The thing is, I’m not moderating comments; I want to clean up SDXL prompts so I can generate “clean” images with Stable Diffusion. (This is for web content on sites I develop.) That’s why I start with an image and ask for a prompt to recreate it, but cleaner. And this often involves images with people in them, as I might be building a website for a beach hotel or whatever.
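To make it concrete, this is the kind of call I mean (the file name and prompt wording are placeholders, and it assumes the image itself gets past the input filter):

```python
import google.generativeai as genai
import PIL.Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-1.5-pro")  # model choice is an assumption

image = PIL.Image.open("beach_hotel_render.png")  # hypothetical Poser/Vue render

response = model.generate_content([
    "Describe this scene as a single SDXL prompt for a family-friendly travel "
    "website. Keep the composition, lighting and setting, but make sure every "
    "person is fully and modestly dressed.",
    image,
])

print(response.text)  # cleaned prompt to feed back into Stable Diffusion / SDXL
```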
Give them clothes and try again.
Try translating the word “black” into other languages for educational purposes. Even if it is in a sentence like “The cat is black”, and you give it all of the context and ask it to translate a number of words, it’s impossible for some languages! Context doesn’t help. Now I’ve been trying to generate images of “moss” and it is not happy with that! I can only surmise what it is “thinking”. I don’t know whether to laugh or cry! I do feel that Google takes things too far in terms of “censorship”.
Oops, didn’t notice the cursing, sorry! I’d like to report that I have not experienced any further censorship since.

