Downscaling degradation issue started happening today with NB2

I’ve been using the new NB model gemini-3.1-flash-image-preview extensively every day.

As of today, I noticed that uploading a 4K-resolution image to Gemini destroys the quality: the image is resized down to 2K before re-rendering. I worked around it by resizing images to 2K myself before sending them to Gemini, and the results are back to normal. I can confirm this by showing the output from yesterday before this began, from today before I applied the downscale fix, and from after I added the fix. Did Google change how it handles large-resolution images uploaded to Gemini? If you zoom in you will see how bad the aliasing is. Even with the workaround, further testing shows the quality is still somewhat degraded; Vertex gives me much better results.
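For anyone wanting to apply the same workaround, the dimension math is simple: cap the long edge at 2048px and scale the other edge proportionally. This sketch shows only the math; the actual resample would be done with an image library such as sharp in Node, which is assumed rather than shown here.

```javascript
// Compute target dimensions so the long edge is at most maxEdge pixels,
// preserving aspect ratio. Pass the result to your image library's resize.
function fitWithin(width, height, maxEdge = 2048) {
  const longEdge = Math.max(width, height);
  if (longEdge <= maxEdge) return { width, height }; // already small enough
  const scale = maxEdge / longEdge;
  return {
    width: Math.round(width * scale),
    height: Math.round(height * scale),
  };
}

// Example: a 4K (3840x2160) photo becomes 2048x1152 before upload.
```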

We’ve been building a real estate photo editing application that uses gemini-3.1-flash-image-preview for image editing tasks (twilight conversion, virtual staging, etc.). We noticed a consistent quality difference in the output images depending on which API endpoint we use to call the same model with identical parameters.

The issue:
When using the Google AI API (@google/genai SDK with apiKey), the output images show noticeable artifacts on fine geometric details — angled siding lines on houses appear jittery/wavy, brick patterns get garbled, and roof shingles lose their clean lines. This is especially visible on architectural details that run at diagonal angles in the photo.

When using Vertex AI (either through Vertex AI Studio or the same @google/genai SDK with vertexai: true), the same model with the same prompt, same image, and same parameters produces noticeably smoother, cleaner output — the angled siding lines are straight, brick patterns are preserved, and fine details remain sharp.

Our setup (identical for both tests):

  • Model: gemini-3.1-flash-image-preview

  • Temperature: 1

  • Top-P: 0.95

  • Thinking level: High

  • Output resolution: 2K

  • Same prompt, same input image (no preprocessing, sent as-is)

  • Output: raw PNG (no JPEG conversion)
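As a concrete sketch, the settings above map onto a @google/genai generateContent request along these lines. The imageConfig.imageSize and thinkingConfig.thinkingLevel field names are our reading of the preview docs and should be verified against the current SDK reference; the rest is the standard request shape. The client `ai` is assumed to be an already-initialized GoogleGenAI instance.

```javascript
// Request config mirroring the parameters listed above.
// NOTE: imageConfig.imageSize and thinkingConfig.thinkingLevel are our
// assumptions from the preview docs -- verify against the SDK reference.
const config = {
  temperature: 1,
  topP: 0.95,
  imageConfig: { imageSize: '2K' },
  thinkingConfig: { thinkingLevel: 'HIGH' },
};

// Not invoked here; shows how the config and image are passed.
// `ai` is an initialized GoogleGenAI client, `imageBase64` a base64 PNG.
async function editImage(ai, prompt, imageBase64) {
  return ai.models.generateContent({
    model: 'gemini-3.1-flash-image-preview',
    contents: [
      {
        role: 'user',
        parts: [
          { text: prompt },
          { inlineData: { mimeType: 'image/png', data: imageBase64 } },
        ],
      },
    ],
    config,
  });
}
```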

What we ruled out:

  • Input image resolution (tested at both 2K and 4K)

  • Temperature (tested 0.2 through 1.0)

  • Guidance scale (removed entirely)

  • Top-P settings

  • Prompt differences (used identical prompts)

  • JPEG compression artifacts (compared raw PNG output)

The only variable that changed the quality was switching from the Google AI API endpoint to the Vertex AI API endpoint. Same SDK (@google/genai v1.31.0), same model name, same config — just initializing with vertexai: true, project: '...', location: 'global' instead of apiKey: '...'.
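To make the A/B comparison reproducible, a minimal harness can send the identical request through both initializations. This is a sketch under stated assumptions: the project id is a placeholder, and the GoogleGenAI class is passed in as a parameter rather than imported so the snippet stays self-contained.

```javascript
// The two client configurations we compared -- identical except endpoint.
// 'your-gcp-project-id' is a placeholder.
const clientOptions = [
  { label: 'Google AI API', options: { apiKey: process.env.GEMINI_API } },
  {
    label: 'Vertex AI',
    options: {
      vertexai: true,
      project: 'your-gcp-project-id',
      location: 'global',
    },
  },
];

// Hypothetical A/B runner: GoogleGenAICtor is the GoogleGenAI class from
// @google/genai, injected so this sketch has no hard dependency on it.
async function compareEndpoints(GoogleGenAICtor, request) {
  const results = {};
  for (const { label, options } of clientOptions) {
    const ai = new GoogleGenAICtor(options);
    results[label] = await ai.models.generateContent(request);
  }
  return results;
}
```

Saving the two outputs side by side makes the aliasing difference on diagonal siding lines easy to inspect.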

Our solution:
We switched from:

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API });

To:

const ai = new GoogleGenAI({
  vertexai: true,
  project: 'your-gcp-project-id',
  location: 'global',
  googleAuthOptions: { credentials: serviceAccountKey }
});

Everything else stayed the same — same model name, same config object, same prompt structure. The output quality immediately matched what we were seeing in Vertex AI Studio.

Question for the team: Is this a known difference between the two API surfaces? Are they using different model serving infrastructure, quantization, or post-processing? For applications where fine detail preservation matters (real estate photography, architectural images), this quality gap is significant enough to require using Vertex AI over the Google AI API.