🔴 [CRITICAL] 503 on Every Model, Every Account — Antigravity Dead on Arrival

[BUG] Agent Terminated Error on ALL Models — HTTP 503 — Fresh Accounts with 100% Quota Remaining

Description:
I am getting the following error on every single model available in Antigravity, right from the start — without sending even a single message:

“Agent terminated due to error. You can prompt the model to try again or start a new conversation if the error persists. See our troubleshooting guide for more help.”

Models affected:

  • Gemini 3.1 Pro (Low & High)
  • Claude Opus 4.6
  • Claude Sonnet 4.6
  • Every other available model

What I have already tried:

  • Signed out and signed in with a brand new Google account
  • Both accounts have 100% quota remaining — zero usage
  • Started multiple new conversations
  • Reloaded/re-ran onboarding
  • Disabled all extensions
  • Tried “continue” and “proceed” prompts in chat
  • Tried Restart Agent Service via Command Palette

None of these steps resolved the issue.

My conclusion:
Since the error appears instantly on fresh accounts with no usage, across every model, this is clearly a server-side issue (HTTP 503) and not related to quota, rate limits, or my local setup.
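For what it’s worth, a 503 with “No capacity available” means the server itself is refusing the request, so the only thing a client can reasonably do is retry with exponential backoff and jitter, and give up after a few attempts rather than hammering the service. Here is a minimal Python sketch of that pattern; `ServiceUnavailable` and `fake_model_call` are stand-ins I made up for illustration, not real Antigravity APIs:

```python
import random
import time

class ServiceUnavailable(Exception):
    """Stand-in for an HTTP 503 from the model backend (hypothetical)."""

def retry_with_backoff(call, max_attempts=5, base_delay=1.0, max_delay=30.0):
    """Retry `call` on 503-style errors with exponential backoff plus jitter.

    This is a client-side mitigation only: if the backend is truly out of
    capacity, every attempt will still fail and the fix has to be server-side.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except ServiceUnavailable:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error
            delay = min(max_delay, base_delay * 2 ** attempt)
            time.sleep(delay + random.uniform(0, delay / 2))

# Demo with a fake model call that fails twice, then succeeds.
attempts = {"n": 0}
def fake_model_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ServiceUnavailable("503: No capacity available")
    return "ok"

print(retry_with_backoff(fake_model_call, base_delay=0.01))  # prints "ok"
```

Capping `max_attempts` matters here: one reply further down in this thread describes getting banned after retrying too aggressively.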

Request:
Please investigate the backend infrastructure. This appears to be the same 503 issue reported in other threads. Has there been any update on a fix or ETA?

Environment:

  • OS: Windows 11
  • Antigravity version: 1.21.9
  • Region: India
3 Likes

Same issue happens for me. I can’t use any available model. The error shows:

agent executor error: model unreachable: UNAVAILABLE (code 503): No capacity available for model gemini-3-flash-agent on the server: UNAVAILABLE (code 503): No capacity available for model gemini-3-flash-agent on the server


1 Like

Yeah, it was happening on all available models in Antigravity, excluding GPT OSS 120B.

What is this image meant to show, and how does it relate to the current Antigravity problem?

Oh, it’s true, it works for GPT OSS 120B. I’ve never used this model before.

Basically the same thing you are experiencing. I triggered too many retries, since I thought my Ultra quota could handle the requests, and got banned, even though I was only using Antigravity for PowerShell script development and infrastructure automation.

I’m telling the truth here and trying to make them take this seriously, since Ultra users are paying the bill and we should get what we paid for.

1 Like

It’s been a whole day and I am still getting the same issue. My quota was at 100% a while back; now, without the model ever responding, it has come down to 60% remaining. Can someone help me with this?

2 Likes

Everyone has this problem. Just delete it, get your money back, and that’s it.

2 Likes

How do I get my money back? :rofl:

Register if you also experience an outage: Google Antigravity Status. Check if Google Antigravity is down or having an outage. | StatusGator

Incredible that they can’t shape their traffic better and prioritize paying subscribers. Could it be because every Google Home is getting Gemini now?

Hi everyone,

I’m dealing with a critical routing/capacity issue on Antigravity and wanted to see if anyone else is facing this or found a workaround.

I recently upgraded to the Google AI Ultra plan ($300) to use agentic workflows with Claude-sonnet-4-6 and Gemini-3.1-pro-high. However, the service is completely bricked for me. Every time I run a prompt, I get an immediate error:

HTTP 503 Service Unavailable "reason": "MODEL_CAPACITY_EXHAUSTED"

Here is the frustrating part: My secondary PRO account works perfectly fine on the exact same machine and setup. This tells me the issue isn’t local. It seems the nodes/servers dedicated to the Ultra tier are completely overloaded or there’s a serious auth token conflict happening in the backend for Ultra users.

I have already tried:

  • Revoking credentials via CLI (gcloud auth revoke)

  • Complete clean reinstall of Antigravity

  • Clearing local .gemini and .config files

Nothing works for the Ultra account. Paying $300 for a service that has 100% downtime while the cheaper tier works seamlessly is unacceptable for business operations.

Has anyone else on the Ultra tier experienced this auth/capacity bug this week? Did any specific CLI commands fix the token conflict for you, or are we just waiting on Google to scale their servers?