Gemini batch jobs for flash-3 and 3.1-pro have been getting stuck in RUNNING or PENDING for more than 24 hrs since 9th March 2026
Can confirm I’m having the same issue
I’m also having the same issue.
Same. Since 3.1-pro was released, the API has been unusable.
Same issue. I've waited 48 hrs and the job is still in JOB_PENDING.
I am seeing something similar, but with a slightly different pattern on gemini-3.1-flash-lite-preview in Vertex AI batch inference.
In my case, the batch job is accepted successfully, but in the Vertex AI console it can sit in:
Running (0/0 done): 0 succeeded, 0 failed
while no output is visible in GCS.
However, if I manually cancel the job in the Vertex AI console, the output then shows up in GCS and appears to be complete. So it looks as though the useful inference work may already have been done, but the job is not reaching a proper terminal/finalized state on its own, and cancellation seems to force finalization/export.
A few details in case they're useful for comparison (rough call pattern sketched after the list):
- Vertex AI batch inference
- Model: gemini-3.1-flash-lite-preview
- Location: global
- Input/output via Cloud Storage
- Python SDK: `from google import genai`
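In case the exact call pattern matters, this is roughly what I'm doing. A minimal sketch with placeholder project and bucket names, assuming the google-genai SDK's batches API:

```python
from google import genai
from google.genai import types

# Placeholder project and bucket names
client = genai.Client(vertexai=True, project="my-project", location="global")

# Submit the batch job: JSONL requests read from GCS, results written to a GCS prefix
job = client.batches.create(
    model="gemini-3.1-flash-lite-preview",
    src="gs://my-bucket/batch_input.jsonl",
    config=types.CreateBatchJobConfig(dest="gs://my-bucket/batch_output/"),
)
print(job.name, job.state)  # starts out as JOB_STATE_PENDING
```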
For comparison, the same general code pattern behaved normally for me with gemini-3-flash-preview and gemini-2.5-flash-lite, where RUNNING appeared to mean actual processing and the jobs terminated cleanly when complete.
So my issue may be related, but it is not simply “long-running with no result”: in my case, manual cancellation seems to release the completed output.
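To be concrete about the workaround, here is a rough sketch (arbitrary timeout, continuing from the client and job above): I poll the job, and if it never reaches a terminal state I cancel it, after which the completed output files appear under the GCS destination prefix.

```python
import time

TERMINAL = {"JOB_STATE_SUCCEEDED", "JOB_STATE_FAILED", "JOB_STATE_CANCELLED"}

deadline = time.time() + 6 * 3600  # arbitrary 6 h cut-off for illustration
while time.time() < deadline:
    job = client.batches.get(name=job.name)
    if str(job.state).endswith(tuple(TERMINAL)):  # handles enum or plain-string state
        break
    time.sleep(300)  # poll every 5 minutes
else:
    # Job never finalised on its own: cancelling it seems to flush the already
    # completed results to the GCS output prefix (gs://my-bucket/batch_output/).
    client.batches.cancel(name=job.name)
```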
Has anyone else seen this specific behaviour with gemini-3.1-flash-lite-preview?
This makes me suspect a batch finalisation/state issue rather than a pure processing delay.