Gemini Image - Repeated Finish Reason: IMAGE_OTHER

Hi everyone,

I am facing an issue with the Gemini image API that I am unable to solve…

The task is very simple: I have a scene image of a model and a logo image, and I want to insert the logo into the scene.

I am seeing very odd behavior: about 75% of the time the logo insertion fails, and the reason returned is Finish Reason: IMAGE_OTHER.

Since this reason is fairly vague, I am unable to identify the root cause.

As you can see at the end of the post, the images are fairly standard and don’t violate any content restrictions.

As for the code, I don’t see much to blame either. It’s fairly standard, with no particular complexity around the Gemini calls: just production code that is fairly defensive but should not affect image generation.

Below is the material I use.

Do you face the same issue?

Thanks for the insights

The code: Google Colab

The prompt is also standard I’d say:

##ROLE## You are a precision-focused Graphic Designer and Art Director. Your expertise is in placing graphical overlays onto finished images with taste and clarity. You do **not** alter the underlying image.

##CONTEXT##: You are provided with 2 images:

1. {logo_image}: A canvas containing a logo

2. {reference_image}: A complex image / scene where the logo should be integrated.

## CORE DIRECTIVE: NON-DESTRUCTIVE & NON-INTRUSIVE PLACEMENT ##

Your task is to place the logo as a **clean, 2D graphical overlay** on top of the reference image.

The logo should look like it was added in a design program, not like it physically exists within the scene.

##INSTRUCTION##: Integrate the logo from {logo_image} into {reference_image} at the optimal position for maximum visual impact while maintaining the rest of the scene.

**Identify Safe Zone:** Analyze {reference_image} and identify a SAFE ZONE to add the logo for maximum visual impact while maintaining the rest of the scene and composition.

*Never create a background for the logo; the logo should look like it originally had a transparent background and should be added in the identified safe zone.

*Maintain the logo’s original design fidelity (typography, shape, proportions, etc.) as a pristine digital asset; you may only modify the logo’s **scale** & **colors**.

*You can change the logo’s **scale** & **colors** to make it visually stand out, ensuring visual harmony and maximum impact without compromising the original composition.

#single_logo: Final image must contain exactly one logo instance with optimal visibility

#safety## If the logo isn’t inserted or not visible, regenerate the image.
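A side note on the `#safety` instruction: the model cannot regenerate on its own once it returns `IMAGE_OTHER`, so any retry has to happen on the caller side. A minimal sketch of such a wrapper, assuming `generate_fn` (a hypothetical name) wraps your actual Gemini call and returns an `(image_bytes, finish_reason)` tuple, with `image_bytes` set to `None` on failure:

```python
import time

def generate_with_retry(generate_fn, max_attempts=4, base_delay=2.0):
    """Call `generate_fn` until it returns an image, retrying on
    non-image finish reasons such as IMAGE_OTHER."""
    last_reason = None
    for attempt in range(max_attempts):
        image, reason = generate_fn()
        if image is not None:
            return image, reason
        last_reason = reason
        if attempt < max_attempts - 1:
            # Exponential backoff before retrying a failed generation.
            time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError(f"Generation failed after {max_attempts} attempts "
                       f"(last finish reason: {last_reason})")
```

With a roughly 75% per-call failure rate, four independent attempts would still fail about 30% of the time, so a wrapper like this mitigates but does not explain the underlying issue.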

I am encountering the same issue on a prompt which has never had problems before.

@Epicflare, @Leto451

I hope you are using gemini-2.5-flash-image-preview (Nano Banana).
“Imagen” is not an image-editing model, so it cannot composite things together the way Nano Banana can.

If you are using Nano Banana, I would suggest trying a prompt that does not use the word “logo”, as this might trigger trademark-related filtering (even if the logo is not from a known brand).

I am indeed using Nano Banana.

It seems to happen more often when there are faces in the picture, even a drawn portrait in the corner of the image.

Same here. I confirm the model name is correct, and I actually use it as part of a two-step process: a first step that generates an image with a product insertion (which the model does correctly, with an almost perfect success rate), then a second step that inserts the logo, which fails frequently.

I have indeed noticed that the failure rate appears to be higher with images containing people.

What is weird, though, is that if I isolate the script and use the exact same reference image and logo image, the success rate tends to increase a lot.

I don’t really understand why, because I reset / clean up the client in my workflow and have checked for any context contamination or accumulation (unless I missed something).

I can’t figure out why the isolated script does better than the same script integrated into a bigger workflow.

It’s possible there’s an intermittent bug or a limitation in the Gemini API that you’re encountering. This is more likely if the issue is not consistently reproducible across all inputs but happens frequently.

Possibly… because the only difference I see between the isolated script and the one in production is the load sent to the API, though I’m far from reaching any quota. But again, I don’t see any wrong pattern anywhere. I actually asked a couple of SWE folks, and they did not see anything wrong either.

Hi! I’m so sorry to hear that you are experiencing issues with inserting logos into an image. Is the image you have inserted the one where you’d like the logo to appear? Would you mind also providing the logo so I can reproduce and debug on my end with nano banana?
Thank you so much!

I just tried your prompt and image here with a Google logo and was able to get it to work. It seems like an amazing prompt!

Thanks for the reply.

The logo is fairly standard and not overly complex, I’d say; here it is:

I have run some additional tests, though, and noticed the following behavior:

- When applied to less complex images (without human beings), the success rate tends to increase.

- As I said before, the isolated script seems to have a higher success rate than the production script, which is integrated into a large stage-by-stage workflow (the workflow essentially works in 3 steps → Stage 1: Gemini generates a prompt to produce an image → Stage 2: Gemini Image produces the reference image → Stage 3: Gemini Image inserts the logo into the reference image).
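For a staged workflow like this, it can help to record each stage’s finish reason so a Stage 3 failure is precisely attributable. A minimal sketch, assuming each stage is a callable returning `(output, finish_reason)` (hypothetical stand-ins for the actual Gemini calls):

```python
def run_pipeline(initial, stages):
    """Chain stages sequentially, keeping a per-stage trace of finish
    reasons so a failure in the final logo-insert step can be pinpointed.
    `stages` is a list of (name, callable) pairs; each callable maps
    input -> (output, finish_reason), with output None on failure."""
    trace = []
    value = initial
    for name, stage in stages:
        value, reason = stage(value)
        trace.append((name, reason))
        if value is None:
            # Stop early and surface which stage failed, with what reason.
            raise RuntimeError(f"stage {name!r} failed: {reason} (trace: {trace})")
    return value, trace
```

Comparing the traces of the isolated script and the production workflow run on identical inputs would show whether the earlier stages are contributing anything to the Stage 3 failures.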

As I said before, I was worried about context contamination / accumulation, so I made sure to clean up the client initialization every time, but this did not bring any improvement.

One more fact that could matter: in the production workflow, I am using batches of API requests with parallel threading / workers to generate several insertions at the same time. Could that be an issue?
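Parallel batches themselves should be fine, but bursty concurrent traffic is one of the few real differences from the isolated script, so it is worth making the concurrency tunable and recording per-job failures instead of aborting the batch. A minimal sketch with a bounded worker pool, where `insert_logo` is a hypothetical per-job callable `(scene, logo) -> image_bytes`:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_batch(jobs, insert_logo, max_workers=4):
    """Run logo insertions in parallel with a bounded worker pool.
    `jobs` maps a job id to a (scene, logo) pair. Lowering `max_workers`
    to 1 should make the batch behave like the isolated script, which
    helps confirm or rule out concurrency as the cause."""
    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(insert_logo, scene, logo): job_id
                   for job_id, (scene, logo) in jobs.items()}
        for fut in as_completed(futures):
            job_id = futures[fut]
            try:
                results[job_id] = fut.result()
            except Exception as exc:
                # Keep the per-job failure instead of failing the whole batch.
                results[job_id] = exc
    return results
```

If the failure rate drops noticeably at `max_workers=1`, that would point at request bursts rather than the inputs themselves.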

More generally, what could explain a difference in success rate between the stage-by-stage, multi-threaded production workflow and the isolated script? As I said before, I’m far from reaching any quota, and there are no red flags in the Google AI Studio dashboard / Cloud dashboard.

Are there any bad practices to avoid that could trigger the problem in production mode but would not be visible with an isolated script?

Thanks a lot for the help.