Gemini 2.5 Flash (Nano Banana) Auto Aspect Ratio issue - Output image has different Aspect Ratio

Hi there.

I’m generating images (via API and via AI Studio), and I want the output image to be the same size/proportion as my input image. However, the API output (in Auto mode in the Aspect Ratio config) is returning an image with a different size.

My input images are 2048x1024, but the output is always 1472x704 (roughly 2.09:1, not the exact 2:1 proportion of the input).
Anyone facing similar issues?
Any idea what might be causing this issue?

The API was working perfectly, but as of today every output comes back 1:1 no matter what AR I request, despite exhaustive efforts :confused: Vexing - if anyone has a fix, that would be much appreciated.

I asked Gemini, and this is what it said (take with a grain of salt): Current State (New Backend): If your app is using the updated model (Gemini 2.5-Flash-Image r2), it will indeed default to 1:1. There isn’t an immediate control parameter to change this within the current version of that specific backend.

  1. The “Why”: As discussed, this change was made to simplify the model’s internal architecture, scaling, and memory usage by unifying the latent grid to a single 1024x1024 square. While beneficial for the model’s efficiency, it unfortunately removed external aspect ratio control.

  2. Upcoming Solution: The good news is that the developers are aware of this limitation and are working on a fix. The outputDimensions parameter is explicitly mentioned as the upcoming solution to “restore control” over aspect ratios. You’ll need to keep an eye out for announcements regarding this patch and update your application once it’s available.

What you can do in the meantime:

  • Wait for the Patch: This is the most direct and recommended solution. The outputDimensions parameter will likely integrate seamlessly with the generation process.

  • Client-Side Cropping/Resizing (Workaround): If you absolutely cannot wait and need non-square images immediately, you would have to:

    • Generate the 1:1 image from the model.

    • Use image processing libraries (e.g., OpenCV, Pillow in Python, or equivalent libraries in other languages) within your own application to crop or resize the generated square image to your desired aspect ratio.

    • Caveats: This is a workaround with limitations:

      • Loss of Information: If you crop, you’re losing parts of the generated image.

      • Distortion: If you simply stretch/squeeze, the image might look distorted.

      • Extra Processing: It adds a processing step on your end.

      • Quality: The results won’t be as good as if the model generated the correct aspect ratio natively.

  • Check for Other Models/Endpoints: If you have access to different image generation models or API endpoints, check if any of them still offer native aspect ratio control. It seems your “old code” was hitting one such endpoint.
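To make the cropping workaround concrete, here’s a minimal Python sketch using Pillow. The `crop_to_aspect` helper and the stand-in square image are my own illustration, not anything from the Gemini SDK:

```python
# Workaround sketch: center-crop the model's 1:1 output back to 2:1
# instead of stretching it (which would distort the image).
# Assumes Pillow is installed (pip install Pillow).
from PIL import Image

def crop_to_aspect(img, target_w, target_h):
    """Center-crop img to the target aspect ratio without stretching."""
    src_w, src_h = img.size
    target_ratio = target_w / target_h
    if src_w / src_h > target_ratio:
        # Source is too wide: trim the left and right edges.
        new_w = int(src_h * target_ratio)
        left = (src_w - new_w) // 2
        box = (left, 0, left + new_w, src_h)
    else:
        # Source is too tall: trim the top and bottom edges.
        new_h = int(src_w / target_ratio)
        top = (src_h - new_h) // 2
        box = (0, top, src_w, top + new_h)
    return img.crop(box)

# Stand-in for the 1:1 image the model currently returns.
square = Image.new("RGB", (1024, 1024))
wide = crop_to_aspect(square, 2, 1)
print(wide.size)  # (1024, 512)
```

As the caveats above say, this throws away the top and bottom bands of the generated image, so it only works when the subject is reasonably centered.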

For now, the best strategy is to prepare for the outputDimensions parameter and consider client-side post-processing as a temporary measure if absolutely necessary. :rofl:

The docs explicitly say that:

“The model defaults to matching the output image size to that of your input image, or otherwise generates 1:1 squares”

I’m sending a 2:1 aspect ratio image with the Auto Aspect Ratio config (both via API and via AI Studio), and the API is not returning a 2:1 image; the output is slightly off at 1472x704.

This is not the expected and documented behavior.
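In case it helps anyone landing here: instead of relying on Auto, the API reportedly lets you pin the aspect ratio explicitly via `generationConfig`. The field names below reflect my reading of the REST docs at the time of writing, so verify them against the current documentation before relying on this:

```json
{
  "contents": [{
    "parts": [{ "text": "A wide banner image of a mountain range" }]
  }],
  "generationConfig": {
    "responseModalities": ["IMAGE"],
    "imageConfig": { "aspectRatio": "16:9" }
  }
}
```

If 2:1 is not among the supported `aspectRatio` values, that could also explain why Auto snaps a 2048x1024 input to a near-but-not-exact size like 1472x704 rather than matching it precisely; that last part is speculation on my end.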