Batch API stalls indefinitely with gemini-3.1-flash-image-preview — only gemini-2.5-flash-image works

I’ve been using the Gemini Batch API for bulk image generation and it works great with gemini-2.5-flash-image. I tried upgrading to gemini-3.1-flash-image-preview for improved image quality, but batch jobs never complete.

What happens:

  • The batch is accepted and moves to JOB_STATE_RUNNING

  • completionStats stays undefined — successfulCount never increments

  • The job runs indefinitely with no progress
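For reference, this is roughly how I observe the stall — a polling sketch, assuming the `ai.batches.get({ name })` call from `@google/genai`; the terminal-state list is taken from the Batch API docs:

```javascript
import { setTimeout as sleep } from 'node:timers/promises';

// Terminal job states per the Batch API docs; anything else means the job
// is still queued or running.
const TERMINAL_STATES = new Set([
  'JOB_STATE_SUCCEEDED',
  'JOB_STATE_FAILED',
  'JOB_STATE_CANCELLED',
  'JOB_STATE_EXPIRED',
]);

const isTerminal = (state) => TERMINAL_STATES.has(state);

// Polls a batch job until it reaches a terminal state, logging progress.
// `ai` is a GoogleGenAI client; `name` is the job name from batches.create().
async function pollBatch(ai, name, intervalMs = 30_000) {
  let job = await ai.batches.get({ name });
  while (!isTerminal(job.state)) {
    console.log(job.state, job.completionStats ?? '(no completionStats yet)');
    await sleep(intervalMs);
    job = await ai.batches.get({ name });
  }
  return job;
}
```

With gemini-2.5-flash-image this loop exits within a couple of minutes; with gemini-3.1-flash-image-preview it logs JOB_STATE_RUNNING indefinitely.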

I’ve tested extensively to isolate the issue:

  • Tried both JSONL file input and inline src array — both stall

  • Tried responseModalities: ["TEXT", "IMAGE"] and ["IMAGE"] — both stall

  • Tried with and without image_config / reference images — both stall

  • Tried gemini-3-pro-image-preview (shown in the docs as the batch example model) — also stalls, doesn’t even leave JOB_STATE_PENDING

  • Switched back to gemini-2.5-flash-image with the exact same JSONL — completes in ~2 minutes

The models work fine for individual generateContent calls — it’s specifically batch mode that doesn’t work.
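To make that concrete: both paths use the same request body. A small builder (a hypothetical helper, not part of the SDK) shows the shape that succeeds as a direct generateContent call but stalls as a batch line:

```javascript
// Hypothetical helper (not part of @google/genai): builds the request body
// shared between one batch JSONL line and a direct generateContent call.
// The same body succeeds as a single request but stalls in batch mode.
function buildRequest(key, prompt) {
  return {
    key,
    request: {
      contents: [{ parts: [{ text: prompt }] }],
      generation_config: { responseModalities: ['TEXT', 'IMAGE'] },
    },
  };
}

// One JSONL line for the batch file:
const line = JSON.stringify(buildRequest('0', 'A simple illustration of a cat'));
```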

Minimal repro (Node.js):

  import { writeFileSync } from 'fs';
  import { GoogleGenAI } from '@google/genai';

  const ai = new GoogleGenAI({ apiKey: 'YOUR_KEY' });

  // Write a single-request JSONL file
  writeFileSync(
    '/tmp/test.jsonl',
    '{"key":"0","request":{"contents":[{"parts":[{"text":"A simple illustration of a cat"}]}],"generation_config":{"responseModalities":["TEXT","IMAGE"]}}}\n'
  );

  const uploaded = await ai.files.upload({
    file: '/tmp/test.jsonl',
    config: { mimeType: 'jsonl' },
  });

  // This works (completes in ~2 minutes):
  const good = await ai.batches.create({
    model: 'gemini-2.5-flash-image',
    src: uploaded.name,
    config: { displayName: 'test' },
  });

  // This stalls forever in JOB_STATE_RUNNING:
  const bad = await ai.batches.create({
    model: 'gemini-3.1-flash-image-preview',
    src: uploaded.name,
    config: { displayName: 'test' },
  });

Has anyone gotten batch image generation working with any model other than gemini-2.5-flash-image? The docs at

https://ai.google.dev/gemini-api/docs/batch-api show gemini-3-pro-image-preview in the example but that doesn’t work either in my testing.

For now I’m using gemini-2.5-flash-image for batch and gemini-3.1-flash-image-preview for single requests only. Would love to know if this is a known limitation or a bug.
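My interim workaround is literally just routing by mode — a sketch with a hypothetical helper name:

```javascript
// Hypothetical routing helper for the workaround above: batch jobs stay on
// the 2.5 model (which completes), single requests get the 3.1 preview.
function pickImageModel(mode) {
  return mode === 'batch'
    ? 'gemini-2.5-flash-image'
    : 'gemini-3.1-flash-image-preview';
}
```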

I’m facing the same issue since yesterday. The batch enters the RUNNING state and then shows no change for hours. It used to return in a reasonable time, but that changed on Wednesday. Real-time requests work fine, but my use case is built around batch mode.

Is this an issue that someone from the Gemini team is looking at?

I think I’m facing the same issue when I send batch requests to gemini-3.1-pro-preview for normal text generation. The only difference is that my requests are stuck in the PENDING state.

Anyone facing similar issues?