Hi,
Over the last few days (starting March 27 or 28), all jobs submitted to the Gemini Batch API seem to be stuck in PENDING status. Is anyone else experiencing the same issue?
Yes! I wrote the following in another thread, but I'm repeating it here too to increase the chances of getting a response from a dev:
Same here. Since late March, when I started experimenting with the Batch API, all my test batches have been stuck in PENDING. I thought I was doing something wrong, but apparently it's not just me!
My tests all specified gemini-2.5-flash as the model, so apparently it doesn’t just happen with newer models.
Batch API promoted but broken — PENDING for 8+ hours, no processing
Google has been sending me promotional emails about the Batch API and its 50% cost reduction. Great value proposition — so I built my entire pipeline around it.
Reality: it doesn’t work.
My setup:
Model: gemini-2.5-flash-lite (also tested gemini-2.5-flash — same result)
Billing: Tier 1 Postpay, confirmed in AI Studio
Service tier header: X-Gemini-Service-Tier: standard
File upload: successful, status ACTIVE, 4.7MB JSONL
Batch creation: returns 200 OK, BATCH_STATE_PENDING
Direct generateContent calls: work perfectly, instant response
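For context, a minimal sketch of this kind of pipeline using the google-genai Python SDK might look like the following. The file path, display name, and model here are placeholders, not the poster's actual values, and exact parameter names can vary by SDK version:

```python
# Hedged sketch: upload a JSONL file of requests, then create a batch job against it.
from google import genai

client = genai.Client()  # picks up the API key from the environment

# Upload the JSONL file containing one GenerateContent request per line.
# Depending on SDK version, you may need to set the mime type for .jsonl explicitly.
uploaded = client.files.upload(file="batch_requests.jsonl")
print(uploaded.name, uploaded.state)  # expect ACTIVE once the upload is processed

# Create the batch job from the uploaded file.
batch_job = client.batches.create(
    model="gemini-2.5-flash-lite",
    src=uploaded.name,
    config={"display_name": "nightly-batch"},  # placeholder name
)
print(batch_job.name, batch_job.state)  # normally starts in a PENDING state
```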
What happens:
2000-request batch submitted April 1st — still BATCH_STATE_PENDING after 8+ hours
pendingRequestCount: 2000, zero processed
updateTime equals createTime — never touched
A previous batch from March 29th was also never processed, auto-cancelled after 5.5 hours
Submitted a 3-request test batch with gemini-2.5-flash — same result, stuck on PENDING
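The "updateTime equals createTime, zero processed" symptom above can be watched for with a simple polling loop. A hedged sketch, assuming the google-genai SDK's `client.batches.get()` and its `state`, `create_time`, and `update_time` fields (attribute names may differ by version; the job name is a placeholder):

```python
# Hedged sketch: poll one batch job and flag it as apparently stuck if it stays
# PENDING with no progress for longer than a chosen threshold.
import time
from google import genai

client = genai.Client()
JOB_NAME = "batches/your-job-id"   # placeholder, not a real job ID
POLL_SECONDS = 300                 # check every 5 minutes
STUCK_AFTER_HOURS = 8

TERMINAL = {"JOB_STATE_SUCCEEDED", "JOB_STATE_FAILED",
            "JOB_STATE_CANCELLED", "JOB_STATE_EXPIRED"}

while True:
    job = client.batches.get(name=JOB_NAME)
    state = job.state.name                      # e.g. "JOB_STATE_PENDING"
    print(f"{job.name}: {state}")

    if state in TERMINAL:
        break

    # The symptom reported above: the job was accepted but never touched,
    # so update_time never moves past create_time.
    hours_waiting = (time.time() - job.create_time.timestamp()) / 3600
    if state == "JOB_STATE_PENDING" and hours_waiting > STUCK_AFTER_HOURS:
        print(f"Job looks stuck: PENDING for {hours_waiting:.1f}h "
              f"(update_time == create_time: {job.update_time == job.create_time})")
        break

    time.sleep(POLL_SECONDS)
```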
What I don’t understand:
I receive emails from Google promoting Batch API as a production-ready cost optimization. I invest engineering time to build a dedicated batch pipeline. Then the service silently accepts my requests and never processes them. No error, no warning, no “batch API is currently degraded” — just silence.
The status page says “All Systems Operational” while GitHub issues #1482 and #2221 show this has been a recurring problem since October 2025.
If Batch API is “best effort” and gets starved by real-time traffic, that should be clearly communicated — not marketed as a reliable 50% discount.
This is exactly my experience as well. We've been running hundreds of thousands of successful requests in batches, and suddenly the Batch API just stops processing our requests for days, even weeks. We're on Tier 2 and far from any rate limits. We've been using gemini-3-flash-preview and gemini-3.1-flash-lite-preview for different tasks, and we see issues regardless of which one we use.
The last time this occurred for us was around March 10. Some users connected it to the release of Pro 3.1.
Now this issue has arisen again. Is it related to the release of inference tiers? No idea. What I do know is this:
The lack of any response is the terrible part. Just tell us you're working on it - acknowledge the issue.
Hey folks, sorry about the issues and the delayed response here. Our team is actively working on fixing this. We’ll circle back with an update once this is resolved.
Thanks for the reply. I really hope this will be fixed soon, because it’s draining me financially at this point haha.
Hi
The issue is now resolved. If you see any jobs stuck for more than 24 hours, do reach out.
Hi Mustan,
Thanks for the updates. However, from a user perspective, the issue doesn’t feel completely resolved yet.
Having to wait up to 24 hours to know if a job is stuck is a significant degradation in performance. Previously, when the batch API was functioning normally, jobs would transition to RUNNING almost immediately after the file upload and typically finish within about an hour.
Is this 24-hour wait time the new expected SLA, or is the team still actively working on restoring the previous processing speed? The current delay severely impacts our batch processing workflows.
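For anyone who wants to check how long their jobs have been waiting, here is a hedged sketch that lists batch jobs and flags any pending past the 24-hour mark mentioned above. It assumes `client.batches.list()` from the google-genai SDK and timezone-aware `create_time` values; attribute names may vary by SDK version:

```python
# Hedged sketch: enumerate batch jobs and flag any that have sat in PENDING for 24h+.
import datetime
from google import genai

client = genai.Client()
threshold = datetime.timedelta(hours=24)
now = datetime.datetime.now(datetime.timezone.utc)

for job in client.batches.list():
    state = job.state.name
    waiting = now - job.create_time  # assumes create_time is timezone-aware
    if state == "JOB_STATE_PENDING" and waiting > threshold:
        print(f"{job.name} stuck in {state} for {waiting} (created {job.create_time})")
```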
I’m experiencing the same problem. My batch jobs have been stuck in JOB_STATE_PENDING for over 24 hours.
Model: gemini-3.1-pro-preview
The issue is still not resolved. I tried gemini-3-pro-image-preview and gemini-3.1-flash-image-preview, and after seeing JOB_STATE_RUNNING for a day the jobs eventually returned JOB_STATE_EXPIRED.
The problem seems to be persisting. I have cancelled a number of jobs which had been pending for over 12 hours and resubmitted them, but the status is the same.
Am I the only one facing this issue right now?
Same here. I created a job that was pending for more than 24 hours, then created another job, and it has been stuck for the last 12 to 14 hours.
Would appreciate some response
Same issues here; I recently decided to switch to batch processing since the 24h window sounded acceptable. One small test job has now been stuck for 26h and a larger one for about 15h. Some clarification or advice from the devs or ops team would really be appreciated. Model response quality and availability have really taken a hit in recent updates.
Hi @shrutimehta,
It seems a similar issue is persisting.
Now I have jobs stuck in the RUNNING state for a very long time (around 50K requests using gemini-3.1-flash-lite-preview, more than 12h), and the rest are stuck in the PENDING state.
It seems other people are suffering from the same issue.
The status page doesn’t show that there is an issue.
Please assist.
Same issue here, I have several jobs stuck in JOB_STATE_PENDING for 24h+ and even one that’s been pending for over 32 hours. Anyone got any news or update from Google? Is this still ongoing for everyone?
model: gemini-2.5-flash & gemini-2.5-flash-lite
Hey folks, sorry about this, we’re aware of the issue and are actively working on it.
Could you please send me your project numbers at alicevik@google.com? That will help with debugging.
Hi all, this issue should be fixed now; there should be no more jobs stuck in the queue.
I’ll keep monitoring here for the next few days or so. If you run into any issues, feel free to raise them here. You can also reach out to me directly at the email above.