Gemini 2.5 Pro Batch API: Jobs stuck in PENDING/RUNNING for over 24 hours

Hello everyone,

I’m experiencing a critical issue with the Gemini 2.5 Pro model via Batch API. My jobs have been stuck in the JOB_STATE_PENDING (and some in RUNNING) state for more than 24 hours without any output or error messages.

Details of the issue:

  • Model: Gemini 2.5 Pro

  • Current Behavior: The system logs show successful batch_response_get requests, but the internal job status remains pending indefinitely.

  • Duration: 24+ hours (and counting).

  • Impact: This is stalling our production pipeline and data processing.

I have noticed several other developers reporting similar issues recently (some mentioning delays up to 4 days with 2.5 Flash as well). It seems like a broader infrastructure bottleneck rather than an isolated request error.

Questions for the community/Google team:

  1. Is there a known outage or a massive backlog for Batch processing in specific regions?

  2. Should we keep these jobs running, or is it better to cancel and resubmit? (Though resubmitting seems to lead to the same result).

  3. Are there any internal timeout limits we should be aware of for Gemini 2.5 Pro batch jobs?
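Regarding question 2, one defensive pattern while this is unresolved is to poll the job state yourself and cancel after a deadline instead of letting jobs sit indefinitely. Below is a minimal, generic sketch of that logic; `get_state` and `cancel` are hypothetical callables you would wire up to your own SDK calls (they are stand-ins, not real Gemini API functions), and only the state names come from this thread.

```python
import time

# Terminal states after which polling should stop. JOB_STATE_PENDING /
# JOB_STATE_RUNNING (the states discussed in this thread) are non-terminal.
TERMINAL_STATES = {"JOB_STATE_SUCCEEDED", "JOB_STATE_FAILED", "JOB_STATE_CANCELLED"}

def wait_or_cancel(get_state, cancel, deadline_s, poll_s=1.0):
    """Poll until the job reaches a terminal state or the deadline passes.

    get_state: callable returning the current job state string (assumption:
               you implement this with your SDK's "get batch job" call).
    cancel:    callable that cancels the job (likewise an assumption).
    Returns the final state; cancels the job if the deadline is exceeded.
    """
    start = time.monotonic()
    while True:
        state = get_state()
        if state in TERMINAL_STATES:
            return state
        if time.monotonic() - start >= deadline_s:
            cancel()  # give up on a stuck job instead of waiting forever
            return "JOB_STATE_CANCELLED"
        time.sleep(poll_s)

# Demo with stubbed callables: a job stuck in PENDING gets cancelled.
if __name__ == "__main__":
    cancelled = []
    final = wait_or_cancel(
        get_state=lambda: "JOB_STATE_PENDING",
        cancel=lambda: cancelled.append(True),
        deadline_s=0.05,
        poll_s=0.01,
    )
    print(final)  # JOB_STATE_CANCELLED
```

This at least bounds how long a stuck job can block a pipeline, though given the reports below, resubmission may land in the same backlog.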

7 Likes

I am seeing the same issue here. Gemini 2.5 Pro Batch API jobs have been stuck in JOB_STATE_PENDING for a very long time and are not progressing at all, with no output or clear error. Tasks that would normally finish in under 10 minutes are not completing. It’s affecting our workflow as well. Resubmitting has not helped so far. Any update from the Google team would be appreciated.

4 Likes

Same problem with the gemini-3.1-flash-lite-preview Batch API :worried:

Google support please help us!

3 Likes

Same issue after 48 hours, and I'm still waiting. This problem is affecting production software.

4 Likes

I’ve been experiencing the same. Any news on this?

2 Likes

Experiencing the same issue, and it is business-impacting. Any updates would be appreciated.

3 Likes

I’ve been experiencing the same issue with the Gemini 2.5 Flash model. It’s been a week now, and I’m wondering when Google plans to fix this.

3 Likes

Same issue for me. Is there a fix planned?

3 Likes

I submitted a job to Gemini 2.5 Pro batch on Friday, and it is still stuck in the pending state. Even when I submit a new batch job now with the same model, it also stays in PENDING.

4 Likes

From this thread, it’s clear that this is not an isolated case and is affecting multiple users, including production systems.

@Google team, could you please:

  • Confirm that this issue is acknowledged?

  • Share any ETA for a fix?

  • Provide guidance or a workaround if available?

This is currently blocking production workflows, so any update would be highly appreciated.

Thank you.

4 Likes

I’m experiencing the same issue. I had multiple Batch API jobs running for over 5 days; one reached RUNNING but stayed there for more than 3 days. I canceled them all and resubmitted.

Has anyone tried switching models? Are there any that don’t have this problem?

2 Likes

I have the same problem with gemini-2.0-flash. More than 24h and still getting JOB_STATE_PENDING.

An answer from Google would be more than welcome.

2 Likes

Same here.
Jobs stay in PENDING forever. No errors, no progress.
Looks like a system issue?

3 Likes

+1 here.
Gemini 2.5 Pro Batch API not completing jobs (24h+).
Any ETA on fix?

2 Likes

Same here. Having PENDING jobs for more than 100 hours. Tried gemini-2.5-pro, gemini-2.5-flash. Any reaction from Google?

2 Likes

Last night all my jobs ran successfully:
"endTime": "2026-03-31T00:26:07.029304720Z",

Is there any feedback from Google what was the issue?

2 Likes

Mine are all still in PENDING or RUNNING, for almost a week.
Can I ask what tier you’re on?

We are currently on Tier 3.

Have your pending jobs been completed? Or have you submitted new jobs?

1 Like

Same here. The Batch API for Gemini 2.5 Flash-Lite has been stuck in the pending state since the last week of March 2026.

1 Like