My AI-generated image looks like Jennifer Lawrence?

So, this was a fun experiment. In trying to see how well Gemini can recognize things from an image, I started feeding it a few images and asked what it could detect. It was pretty detailed, identifying anklets, earrings, clothing, and objects in the background. A fairly complete list, too.

But then I asked Gemini if it recognized the person in the AI image that I had made a few hours earlier. And apparently, “While the character in the images shares some physical characteristics with Jennifer Lawrence—such as a similar face shape and eye set—it is important to clarify that she is not actually the actress.”

Well, okay… But still, there are similarities between the two faces, which is odd since the image was made with Gemini itself. So I checked another image of a completely different woman. Again, there were similarities.

But for an AI-generated image of a Naga, the likeness was that of Angelina Jolie or Megan Fox. So I tried asking “Is she like someone famous?” with a few more generated images. For some reason, Jennifer Lawrence pops up regularly. Even for a face with dark skin…

But then I tried a new prompt, and quickly I got the names of other well-known actresses…

So why does this matter? Well, it tells us something about the dataset Gemini is trained on, and the dataset used by Imagen to generate these images. The problem is that if a generated image looks too close to someone famous (like Jennifer Lawrence), the AI might trigger the safety system, thinking that we want to create fake images of that person.

Now, I just want to generate images for characters in stories that I want to write. For this purpose I use AI to generate template sheets of the characters I want. These are images showing a single person from four sides: front, left, right and back. Using these references, I can tell the AI to generate images in specific settings or situations and get a pretty accurate result that looks like the reference. Sometimes I need to take a few more steps with the AI to get an even better result, but I get results…

But not always. Sometimes the safety settings give me a vague error like “Other”, “Prohibited” or “Safety” while I’m just trying to make SFW images from my references. And these references are also pretty safe. And that seems to cause problems…

Because the safety checks decide it’s a famous person when the face merely looks a bit like someone, they block my requests and refuse to work unless I make additional changes to the prompt and system instructions. And that’s annoying.

Is there any way to stop this annoying behavior?

Hi Katje,

Gemini’s training data may contain many images of celebrities, so generic prompts like “beautiful woman” often accidentally generate faces resembling famous actors.

This triggers the API’s safety filters designed to prevent deepfakes.

To stop this, use mixed ancestries (for example: half-Estonian, half-Peruvian) and specific facial imperfections in your prompt to force the model to create a unique face that doesn’t trigger the RPI block.

Please refer here for best practices on prompt design.

Thank you!


Well, in my case I started with a picture of a friend of mine, from behind. Her face wasn’t visible. I asked it to rotate her and create a face and POOF, Jennifer Lawrence. And it refused to handle this image for a while…

I know the safety filters are there to prevent deepfakes, but then the model should not be generating images that are almost deepfakes to begin with. When the AI generates an image that looks like a celebrity and then refuses to use that image for further adjustments, something is wrong with the algorithm that generated the first image.

I also know that images generated by Imagen carry a watermark marking them as AI-generated. So the AI can know it generated the image itself, and that it is not a real photo of a celebrity.