Frequent 503 Errors (Service Unavailable) across all models

Hi everyone,

I’m writing to report a significant spike in 503 (Service Unavailable) errors while using the Gemini API lately.

I’ve noticed a few specific patterns:

  • Across all models: This isn’t isolated to one specific version. I’m seeing it with both Gemini 3 Flash Preview and Gemini 2.5 Flash.

  • Not a Rate Limit: To be clear, these are not 429 errors (Too Many Requests). My logs show 503 errors, indicating server-side unavailability rather than quota exhaustion.

  • App Disruption: These errors are becoming so frequent that they are completely interrupting my app’s functionality and user experience.
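For anyone triaging similar failures, here is a minimal sketch of how I separate the two cases in my own client code. The helper name and the mapping are mine, not from any official SDK; the key point is that 503s are transient server-side errors worth retrying, while 429s mean quota pressure and fast retries only make things worse:

```python
# Hypothetical helper: map a Gemini API HTTP status to a handling strategy.
# 503 = transient server-side unavailability -> retry with backoff.
# 429 = rate limit / quota exhaustion -> slow down; fast retries won't help.
RETRYABLE = {500, 502, 503, 504}

def classify_status(status_code: int) -> str:
    """Return a coarse strategy label for an HTTP status code."""
    if 200 <= status_code < 300:
        return "ok"
    if status_code in RETRYABLE:
        return "retry"      # server-side, usually transient
    if status_code == 429:
        return "throttle"   # quota pressure, back off longer
    return "fail"           # client error, don't retry

print(classify_status(503))  # retry
print(classify_status(429))  # throttle
```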

Current Situation: I am currently on the Free Tier. My original plan was to enable billing and transition to the “Pay-as-you-go” tier once my credits were exhausted. However, I am now hesitant to do so.

My concern: I’m worried that after enabling billing, I will continue to experience these 503 errors. Does the Paid Tier offer better stability or a different infrastructure priority that mitigates these “Service Unavailable” issues?

I’d love to hear if others are experiencing this or if there is any official word on stability improvements for paid users.

Best regards,

9 Likes

I’ve been having the same problem since yesterday. Yesterday, requests were working occasionally, but today, so far, no results. I hope this is a temporary issue. I also have a free plan. In theory, it should work flawlessly, since it’s provided to test the service. I think the issue is server load.

1 Like

Hello. I’m in exactly the same situation. Text generation works via the API, but image generation does not. Are paying users running into this problem too?

Hi everyone,

I’ve been experiencing persistent 503 Service Unavailable errors when calling the model gemini-3-pro-image-preview for the last 2–3 days.

At first the requests were occasionally working, but now all requests fail consistently.

Details:

  • Model: gemini-3-pro-image-preview

  • Error: 503 Service Unavailable

  • Behavior: Happens on every request, no successful responses

This is blocking development and testing. Since this is a preview model, I understand there may be limitations, but the model has been completely unavailable for several days.
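For reference, this is roughly the request shape I’m sending (the endpoint path follows the public `generateContent` REST API; the prompt text here is just a placeholder I made up):

```python
import json

MODEL = "gemini-3-pro-image-preview"  # the model failing for me
BASE = "https://generativelanguage.googleapis.com/v1beta"

def build_request(model: str, prompt: str) -> tuple[str, str]:
    """Return (url, json_body) for a generateContent call."""
    url = f"{BASE}/models/{model}:generateContent"
    body = json.dumps({"contents": [{"parts": [{"text": prompt}]}]})
    return url, body

url, body = build_request(MODEL, "A red bicycle")
print(url)
```

Every POST to this URL (with the API key header set) currently comes back 503 for me, regardless of prompt.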

Can someone from Google confirm:

  • Whether this model is currently at capacity or disabled?

  • If there is any ETA or recommended alternative?

Thanks.

1 Like

Have exactly the same issue with tier 1 (pay as you go). Seems that Google is clearly not ready to absorb traffic…

I am falling back to GPT for now, which is more reliable at this stage, since my users have noticed that my services always go down at the same time of day. Gemini is a good product but has no capacity to scale, which is a shame.

1 Like

Running into this issue as well – temporarily falling back to Claude :confused:

1 Like

I have been facing the same issue for the past three days.

1 Like

Yesterday more than 50% of my calls raised a 503, and today it’s the same. I am clearly losing trust in the service…

I’ve been running into the same issue lately. Can a company this big really not handle a problem like this? It’s so disappointing.

1 Like

Is there a way to tag someone from the Google team or report this issue? It has been going on for far too long.

2 Likes

Did you find any solution, or the reason why this happened?
And did you find any alternative?
I recently tried the Vertex AI API, but there is no gemini-3-pro-image-preview model there.

1 Like

Same thing today… Very poor service from Google and no reaction… I am losing money and clients… It’s so disappointing.

Can Claude work with images like the Nano Banana Pro?

How is it getting even worse? Is anyone actually going to take care of this?

Hi everyone, just a quick update on what I’m seeing on my end regarding these stability issues:

1. Peak Error Window: I’ve identified a clear pattern where service degradation is at its worst between 12:00 and 16:00 (Madrid Time / CET). This seems to coincide with the East Coast (US) waking up and connecting, likely creating a global capacity bottleneck. Outside of these hours, the 503 errors are much less frequent.

2. Model-Specific Instability: The gemini-3-pro-image-preview model is currently the most unstable for me. It fails almost consistently during peak hours, likely due to the high GPU demand for image processing combined with its “preview” status.

3. The “Empty Response” Issue (Ghost Completions): I’m also seeing a concerning behavior where the API returns a 200 OK but with an empty content object, even though tokens are being consumed. Here is a snippet of a recent response from gemini-3-flash-preview:

{
  "candidates": [
    {
      "content": {},
      "finishReason": "STOP",
      "index": 0
    }
  ],
  "usageMetadata": {
    "promptTokenCount": 4011,
    "totalTokenCount": 6114,
    "thoughtsTokenCount": 2103
  },
  "modelVersion": "gemini-3-flash-preview"
}

Key takeaway from this:

  • The model is reasoning (in this case, it spent 2,103 tokens in thoughtsTokenCount).

  • The finishReason is STOP, implying it thinks it finished correctly.

  • However, the actual output is empty.

  • Crucially: These tokens are being counted against the quota/billing despite no usable text being generated.

Is anyone else seeing this “Ghost Completion” where reasoning happens but no final output is delivered? It seems the internal overhead during peak hours might be causing the generation to cut off right before the final output stage.
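In case it helps others, this is the check I’ve started running on every response to catch these before they reach users. The helper is my own (not part of any SDK), and it only inspects the fields shown in the response above:

```python
import json

def is_ghost_completion(resp: dict) -> bool:
    """Detect a 200 OK response that consumed tokens but produced no text."""
    candidates = resp.get("candidates") or []
    has_text = any(
        part.get("text")
        for c in candidates
        for part in (c.get("content") or {}).get("parts", [])
    )
    tokens_used = resp.get("usageMetadata", {}).get("totalTokenCount", 0) > 0
    return tokens_used and not has_text

# The exact response from my logs above:
resp = json.loads("""{
  "candidates": [{"content": {}, "finishReason": "STOP", "index": 0}],
  "usageMetadata": {"promptTokenCount": 4011, "totalTokenCount": 6114,
                    "thoughtsTokenCount": 2103},
  "modelVersion": "gemini-3-flash-preview"
}""")
print(is_ghost_completion(resp))  # True
```

When this returns True, I treat the call as a failure and retry, since the tokens were billed but nothing usable came back.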

I was fully prepared to enable billing and transition from the free tier to a paid plan to support my project. However, given this level of instability—especially the 503s and the “empty” successful responses—I’m finding it hard to commit. I’m currently forced to explore other LLM alternatives to ensure my application remains functional during peak hours.

I hope the team can address these capacity issues soon, as I’d much rather stay within the Gemini ecosystem.

2 Likes

If you are from the subcontinent, then yes, Antigravity is probably restricting us.

Exactly the same pattern here; I am using gemini-3-flash-preview on my end. All requests have been getting a 503 (none pass) in the same time range for 4 days now.

Regarding the ghost response, I haven’t noticed it so far, but I will have a look. If you are using AI Studio, try inspecting the requests directly within it; it automatically logs all the requests that went through the API.

Just adding myself to the pile. gemini-3-flash-preview 503 errors as of yesterday.

1 Like

I’ve been having the same issue for a while. Can’t Google engineers even fix a basic issue like this?

Yeah, that’s a serious issue too. Right now, I’m just relying on retries to fix it, and it usually goes through after a few attempts. But the 503 errors feel way worse because a simple retry often doesn’t cut it.
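For what it’s worth, this is the shape of the retry wrapper I use. It is a generic sketch (my own helper, not from any SDK), with jittered exponential backoff so that everyone retrying at once doesn’t hammer the service in sync:

```python
import random
import time

def retry_with_backoff(call, max_attempts=5, base_delay=1.0):
    """Retry `call` with jittered exponential backoff.

    `call` should raise an exception on failure (e.g. a 503) to trigger
    a retry, and return normally on success.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts, surface the error
            # 1x, 2x, 4x, ... the base delay, plus jitter
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)

# Stub demo: fails twice with a fake 503, then succeeds.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("503 Service Unavailable")
    return "ok"

print(retry_with_backoff(flaky, base_delay=0.01))  # ok
```

It helps with the occasional 503, but as noted above, when the service is down for hours during peak windows no amount of retrying gets through.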