Gemini 3 Pro Image Preview returning persistent 503 errors despite enabled billing

We’ve been experiencing persistent failures with the Gemini 3 Pro Image Preview model over the past 24–48 hours.

Initially, requests were failing with 499 errors, and they’ve now escalated to consistent 503 (model overload). This is happening despite billing being enabled, usage well within documented limits, and operating as a Tier-1 user.

Details:

  • Model: Gemini 3 Pro Image Preview

  • Errors observed: 499 → 503

  • Region: South India

  • Billing: Enabled

  • Usage limits: Not exceeded

  • Impact: Production image generation workloads are timing out or failing entirely

From logs, the failures appear to be upstream model capacity or regional availability issues, not client-side errors.

Is this a known outage, regional capacity constraint, or degradation affecting Gemini 3 Pro Image Preview?
If so, is there an ETA, mitigation strategy, or recommended fallback model?

Any confirmation from the Google team would be appreciated, as this is currently blocking production use cases.

3 Likes

Yeah, I'm facing a similar issue. I tried to contact the Google team but got no response from them either. I think this problem is going to take a long time to fix; we should start looking for alternatives.

1 Like

This is extremely concerning for a production-ready API. The issue appears to be region-specific underperformance, and while we have temporarily wired a fallback model (SeedDream 4.5) to keep the site running, that is not an ideal or long-term solution.

When teams trust APIs, especially from large providers like Google that actively encourage production adoption of AI models, there is an expectation of baseline reliability and capacity transparency. It is not acceptable for production websites to experience sustained failures like this, particularly when billing is enabled and access is via a company account.

Hey, I tried SeedDream 4.5 but didn't find it very accurate. Can you share more about the accuracy and clarity of the images it generates? I recently talked to support but didn't get anything helpful from them; instead, they were only promoting Vertex AI.

SeedDream 4.5 is a product by BytePlus. I have tested it and found the quality to be quite good and currently the closest alternative to NBPro that I have tried. That said, NBPro is still on a different level, and I do not think there is a true equivalent in this space yet. That is precisely why we chose it for production.

If needed, you can try SeedDream by registering on their website. They provide some free credits, which makes it easy to wire up quickly and experiment.

In situations like this, when the model we rely on becomes unavailable, having a temporary fallback helps keep the website operational. It is not a replacement, but it has been useful as a stopgap until the primary service stabilizes.
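The stopgap described above can be sketched as a simple primary/fallback wrapper. This is a minimal illustration, not a real SDK call: `primary` and `fallback` are placeholder callables standing in for the Gemini and SeedDream clients, and the wiring is an assumption about how such a fallback might look.

```python
def generate_with_fallback(prompt, primary, fallback):
    """Try the primary image model first; if it fails (e.g. after all
    retries are exhausted on 503s), fall back to the secondary model.

    `primary` and `fallback` are callables taking a prompt string and
    returning generated image data -- placeholders for the real clients.
    Returns (result, source) so the caller can tell which model answered.
    """
    try:
        return primary(prompt), "primary"
    except Exception:
        # Primary model unavailable: keep the site running on the fallback.
        return fallback(prompt), "fallback"
```

Returning the `source` alongside the result makes it easy to log how often traffic is actually landing on the fallback.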

Also getting 503 for the last 7h.

Hi @AIKIZI_Team

I’m sorry to hear about the performance issues you’ve been experiencing. I think this is related to the capacity problems mentioned in this thread:

Frequent 503 Errors (Service Unavailable) across all models

As Logan said there, we’re working hard to increase capacity & reduce 503’s.

I'd recommend implementing an exponential backoff & retry approach; this may help with a lot of the failures.

Hi Jon, thanks for the update and confirmation that the team is working on capacity.

Just wanted to confirm that we already have exponential backoff and retry implemented on our end:

Our current retry configuration:

  • Image Generation: 5 attempts with exponential backoff (1s → 2s → 4s → 8s) + 10-30% jitter

  • Retrying on: 500, 502, 503, 504, and 524 errors
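For reference, a retry loop matching the configuration above might look like this sketch. The function name and the `call` interface (a callable returning `(status, result)`) are illustrative, not part of any real SDK:

```python
import random
import time

# Transient server errors worth retrying, per the config above.
RETRYABLE_STATUS = {500, 502, 503, 504, 524}

def generate_image_with_retry(call, max_attempts=5, base_delay=1.0):
    """Retry `call` on transient server errors, with exponential backoff
    (1s, 2s, 4s, 8s between the 5 attempts) plus 10-30% jitter.

    `call` is a zero-argument callable returning (status_code, result).
    """
    for attempt in range(max_attempts):
        status, result = call()
        if status not in RETRYABLE_STATUS:
            return status, result
        if attempt < max_attempts - 1:
            delay = base_delay * (2 ** attempt)      # 1, 2, 4, 8 seconds
            delay *= 1 + random.uniform(0.10, 0.30)  # add 10-30% jitter
            time.sleep(delay)
    # All attempts exhausted: surface the last retryable status.
    return status, result
```

The jitter spreads retries out so that many clients failing at once don't hammer the service in lockstep.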

Despite this, we’re still seeing high failure rates at all times, with requests exhausting all retries before succeeding.

We understand this is a capacity issue on Gemini’s side and appreciate the transparency. A few questions that would help us plan:

  • Would increasing our retry count/delays further help?

We're committed to Gemini 3 Pro Image Preview for production and happy to wait this out; we just want to make sure we're doing everything we can on our end while capacity scales up.

Thanks for keeping us updated.

Would increasing our retry count/delays further help?

Caveat: I haven't researched the secret sauce of our load balancing, but I suspect it would, yes. My intuition would be to increase the number of retries, for example adding "8 → 16", if this is OK for your user experience.

We’re committed to Gemini 3 Pro Image Preview for production and happy to wait this out, just want to make sure we’re doing everything we can on our end while capacity scales up.

This is great, and I really appreciate your patience. Be assured, there are ongoing conversations / activity on resolving this.

1 Like

+1 here. Can't even build reliably with it like this, never mind ship anything real to production for a client.

1 Like

Thanks Jon, appreciate the guidance. We'll bump our retry configuration to include the longer delays.

Updated backoff: 1s → 2s → 4s → 8s → 16s (5 retries, ~31s total retry window)
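As a quick sanity check on the quoted window, the extended schedule works out as follows (assuming the same 10-30% jitter as before):

```python
# Base backoff schedule for 5 retries (before jitter).
delays = [2 ** i for i in range(5)]   # [1, 2, 4, 8, 16] seconds
base_window = sum(delays)             # 31 seconds total

# With 10-30% jitter applied to each delay, the actual window
# lands somewhere between roughly 34s and 40s.
jittered_min = base_window * 1.10
jittered_max = base_window * 1.30
```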

Honestly, for image generation our users are fine waiting even a few minutes if needed — the UX can accommodate that. The real pain point is consistent failures that exhaust all retries and return nothing. A slow success is far better than a fast failure.

Agreed. The reliability right now makes it hard to justify using this for anything client-facing. A brief outage is one thing, but sustained failures over several days are concerning for a production-labeled API. Really hoping this improves soon.

1 Like

I've started feeling like I'm trusting a failing system: a production-grade API throwing errors like it's a test deployment. Even my tests never failed like this. I'm concerned enough that I think I need to find a permanent replacement for it.