Model: gemini-2.5-flash
Issue Type: Batch jobs not completing after 72+ hours
PROBLEM DESCRIPTION:
Seven (7) batch jobs submitted on December 29, 2025 remain in “JOB_STATE_PROCESSING”
status after 72+ hours. Expected completion time is 1-24 hours per documentation.
Job IDs:
[List your 7 job IDs here if you have them]
SYMPTOMS:
Jobs submitted successfully via Batch API
Files uploaded successfully (received file IDs)
Jobs show as “PROCESSING” in Google AI Studio
No progress or completion after 72+ hours
No error messages or failure status
ATTEMPTED RESOLUTIONS:
Verified API quota status (Tier 1 paid account)
Checked quota dashboard - all limits show as reset
Even if the JSON itself is valid, the schema expected by the Batch API is strict: the parser looks for a specific top-level key (request) in each line and fails immediately if it is missing (see the Batch API Input File documentation).
Please double-check the JSON structure produced by your file generation script.
These stuck jobs are likely zombie jobs: they may have failed internally without the final status being reported back to the API surface, or the queue for gemini-2.5-flash in your region may be stalled. Please try canceling the jobs and re-running them (see the docs on canceling a batch job).
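If you want to automate the cancel-and-retry step, one approach is a small "stuck" check based on the documented 1-24 hour completion window. This is a sketch under stated assumptions: the threshold and the set of terminal state names (matching the JOB_STATE_PROCESSING naming seen in AI Studio) are mine, not an official list, and the actual cancellation would go through your SDK or HTTP client of choice.

```python
from datetime import datetime, timedelta, timezone

# Assumed threshold: the docs quote 1-24 hours for completion, so anything
# non-terminal past 24h is treated as a cancellation candidate.
STUCK_THRESHOLD = timedelta(hours=24)

# Assumed terminal states, following the JOB_STATE_* naming convention.
TERMINAL_STATES = {"JOB_STATE_SUCCEEDED", "JOB_STATE_FAILED", "JOB_STATE_CANCELLED"}

def is_stuck(state, create_time, now=None):
    """Return True if a batch job is still in a non-terminal state
    after the documented completion window has elapsed."""
    now = now or datetime.now(timezone.utc)
    return state not in TERMINAL_STATES and (now - create_time) > STUCK_THRESHOLD
```

Jobs flagged by this check could then be canceled and resubmitted with fresh requests; keep the original input files so the retry is a straight re-upload.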
I’m having a similar issue: everything was working fine, but all of a sudden all my jobs (even small ones) get stuck in processing and never go through. It’s been over 24 hours now.
We just rolled out some updates on the batch service side to get more batches through the queue for 2.5 Flash-Lite. In general, there is a lot of demand for this model right now, which puts pressure on batch requests.