Batch API is taking longer than 24h (gemini-3.1-pro)

I’ve been having this exact same issue for a few days now with both the Flash and 3.1 Pro models. I’ve tested a lot of different variations, but nothing seems to fix it.

For the last 2-3 weeks I've been having the same issues as the people above - all my batches just sit in the queue forever and don't do anything. All the models are doing the same thing.

Our teams are actively working on resolving this issue. You should start seeing batches getting completed today.

I can confirm that something is happening: a sample batch that had been in the queue for 2 days suddenly started processing, and 20 batches straight after are now rolling too. Good start to the weekend, thanks!

The problem has been resolved for me as well.
Thank you Mustan and Google AI Team.

I can also confirm that. I will test it a bit more, but it seems to be working so far! Thank you, Mustan! Great responsiveness :+1:

I think the issue is back for gemini 2.5 flash - none of my batches got processed during the last 30 hours, they just expire. I’ve tried it on three GCP projects - same issue, so not project-specific.

We have noticed the queues for 2.5 getting jammed and the team is actively working on getting it resolved; we might have to fail running jobs to unblock them. Will get back with an update.

Hi Lucia, thank you very much - is there an ETA for a fix?

It’s been 4 days and 23 hours since 2.5 flash model stopped delivering results in batch mode in my projects.

Needless to say how badly this affects businesses.

Bumping on this as I am continuing to experience the same issue across multiple Gemini models (over 48 hrs and still pending).

Ben, you can try Vertex AI - the issue seems to be specific to the Studio API. But the Vertex AI API has a different query format (snake_case vs. camelCase) and some other nuances that need to be taken into account - a drop-in replacement won't work (which is another ridiculous oversight from Google engineers).
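To illustrate the key-casing difference mentioned above, here is a minimal sketch of a recursive converter from camelCase request keys to snake_case. The field names below are illustrative only, and real migration involves more than key renaming (the other nuances noted above), so treat this as a starting point rather than a complete shim.

```python
import re

def camel_to_snake(name: str) -> str:
    # Insert an underscore before each interior uppercase letter, then lowercase.
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

def convert_keys(obj):
    # Recursively rewrite dict keys; lists and scalars pass through unchanged.
    if isinstance(obj, dict):
        return {camel_to_snake(k): convert_keys(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [convert_keys(v) for v in obj]
    return obj

# Example request body with illustrative camelCase fields:
request = {
    "generationConfig": {"maxOutputTokens": 256},
    "contents": [{"role": "user", "parts": [{"text": "Hello"}]}],
}
print(convert_keys(request))
# generationConfig -> generation_config, maxOutputTokens -> max_output_tokens
```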

Hi, using client.batches.create with gemini 3.1 preview, the batch has been stuck for 24+ hours. Thank you.

It’s been 9 days, 14 hours since last successful resolution of the batch for gemini 2.5 flash model.

I don’t know in what universe this is an acceptable SLA.

We’re paying customers, not free riders.

I’m experiencing the same issue. I had multiple Batch API jobs running for over 5 days; one reached RUNNING but stayed there for more than 3 days. I canceled them all and resubmitted.
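The "cancel and resubmit after a deadline" workaround described above can be automated. The sketch below assumes a hypothetical `client` object with `get_state`, `cancel`, and `create` methods (modeled loosely on the batches interface mentioned in this thread - this is not the official SDK API), and uses the `BATCH_STATE_*` strings another poster reported seeing.

```python
import time

# Terminal states after which there is nothing left to wait for (assumed names).
TERMINAL_STATES = {"BATCH_STATE_SUCCEEDED", "BATCH_STATE_FAILED", "BATCH_STATE_CANCELLED"}

def wait_or_resubmit(client, job_name, request, deadline_s=24 * 3600, poll_s=60):
    """Poll a batch job; if it is still non-terminal when the deadline
    expires, cancel it and submit a fresh job. Returns the name of the
    job to keep watching (the original, or the resubmitted one)."""
    start = time.monotonic()
    while time.monotonic() - start < deadline_s:
        if client.get_state(job_name) in TERMINAL_STATES:
            return job_name
        time.sleep(poll_s)
    client.cancel(job_name)        # give up on the stuck job
    return client.create(request)  # resubmit and hand back the new job name
```

This does not fix the underlying queueing problem, but it caps how long a stuck job blocks a pipeline.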

Does anyone have any update?

I switched to Vertex AI - 2.5 flash works there in batch mode, but there are differences in implementation.

Yeah, problem seems to be back unfortunately - 2.5 Flash taking forever to process!

Unfortunately we did have more trouble and bugs to resolve, but the team is closely monitoring the forum & system for latency increases and will be quick to reply if jobs are queuing up again.

I'm also seeing this problem via the batch API on model "gemini-3-pro-image". It's been in the "BATCH_STATE_RUNNING" state for over 27 hrs.

@Jrod_MR I suspect that something is wrong with 4K image generation, while 2K images work fine with the Batch API. Are you generating 4K images?