I have an active Google One AI Premium subscription and am experiencing a critical service disruption in the Antigravity desktop client. For several days now I have been completely locked out, facing a "Weekly Baseline Quota" countdown of over 4 days (initially 168h).
This is clearly a backend tier-resolution regression (a mismatch between billing status and API quota flags), as evidenced by the following:

- Service disparity: The exact same models (Gemini Pro 3.1) work perfectly, with no restrictions, in the Gemini Web App and Google AI Studio. This suggests the Generative AI API itself is functional, but the Vertex AI / OAuth scope used by the desktop client is incorrectly falling back to Free Tier limits.
- Persistent countdown: The lockout timer survives re-authentication and session resets, suggesting a hard lock at the UID level for this specific client ID.
- Usage: The lock appeared suddenly, without any unusually heavy token usage that would justify weekly baseline exhaustion for a Pro user.
Troubleshooting already performed (to save your time):

- Fully cleared the application cache and local data directories (Application Support / AppData).
- Revoked and re-authorized OAuth permissions for Antigravity via Google Account Security settings.
- Forced sign-out of all remote sessions (cross-platform).
- Re-installed the client.
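For anyone reproducing the cache-clearing step, here is a minimal cross-platform sketch. The directory names are assumptions based on common Electron-style layouts; the real Antigravity data directories may differ, so adjust `APP_NAME` and the paths to match your install.

```python
import shutil
import sys
from pathlib import Path

# Assumed application name -- verify against your actual install.
APP_NAME = "Antigravity"

def candidate_cache_dirs(home: Path, platform: str) -> list[Path]:
    """Return the per-OS application data directories to clear."""
    if platform == "darwin":  # macOS
        return [home / "Library" / "Application Support" / APP_NAME,
                home / "Library" / "Caches" / APP_NAME]
    if platform.startswith("win"):  # Windows (sys.platform == "win32")
        return [home / "AppData" / "Roaming" / APP_NAME,
                home / "AppData" / "Local" / APP_NAME]
    return [home / ".config" / APP_NAME,  # Linux fallback
            home / ".cache" / APP_NAME]

def clear_caches(home: Path, platform: str) -> list[Path]:
    """Delete each existing cache directory; return the ones removed."""
    removed = []
    for d in candidate_cache_dirs(home, platform):
        if d.is_dir():
            shutil.rmtree(d)
            removed.append(d)
    return removed

if __name__ == "__main__":
    for d in clear_caches(Path.home(), sys.platform):
        print(f"cleared {d}")
```

Quit the client fully before running this, or the app may recreate (or hold locks on) the directories mid-deletion.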
The Antigravity client is being incorrectly assigned a ‘Free Tier’ weekly baseline instead of the advertised Pro rolling refresh.
Request for Action: Could a Google engineer please manually inspect and reset the quota flags/counters for my UID? I am happy to provide my account details and technical logs privately (via DM or secure channel) for further debugging.
Case ID (Google Support): [0-1320000040773]
Thanks in advance for your assistance in resolving this blocker to my professional work.
Update – Visual Evidence (Screenshot Timeline)
I’m attaching a composite screenshot showing the progression of the issue across 5 timestamps spanning March 11–14, 2026. This provides concrete proof that this is a backend regression, not a client-side misconfiguration.
What the timeline shows:

- 11.03 @ 16:47 – Everything normal. Gemini 3.1 Pro refreshing in ~3h 41min (expected Pro rolling behavior). No warning icons.
- 11.03 @ 21:44 – Still normal. Gemini 3.1 Pro refreshing in ~4h 59min. Claude models refreshing in ~1 day 5h.
- 11.03 @ 23:01 – Hard lock triggered (within ~77 minutes of the previous screenshot, with no unusual usage in between). Gemini 3.1 Pro suddenly jumps to 6 days 21 hours. Warning icons appear. Gemini 3 Flash is unaffected and continues normal cycling.
- 13.03 @ 17:51 – Lock persists on Gemini 3.1 Pro (5 days 2h). Claude models briefly recovered and are refreshing normally (~2h 20min window), then…
- 14.03 @ 03:59 – Claude models are re-locked (5 days 18h). Gemini 3.1 Pro still locked (4 days 13h).
Key observations from the data:

- The lock appeared instantaneously between two screenshots (21:44 → 23:01 on March 11) with no heavy usage in between. This is consistent with a server-side quota flag being incorrectly applied, not organic exhaustion.
- Gemini 3 Flash is consistently unaffected: it continues its normal 4–5h refresh cycle throughout the entire period. This suggests the bug specifically affects higher-tier model quota buckets.
- The Claude models showed a brief recovery window on the afternoon of March 13, then were re-locked hours later, suggesting a backend process is periodically re-applying the incorrect quota flag.
- The countdown does tick down in real time, confirming this is a genuine 7-day hard lock, not a display glitch.
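The "ticks in real time" observation can be checked arithmetically from the screenshots: the wall-clock time elapsed between the 23:01 and 17:51 captures should match the countdown decrease. It does, to within the one-hour rounding of the displayed values:

```python
from datetime import datetime

# Timestamps and remaining-time readings taken from the screenshots.
# Displayed countdowns are rounded to whole hours, so ~1h drift is expected.
t1 = datetime(2026, 3, 11, 23, 1)    # countdown read: 6 days 21h (165h)
t2 = datetime(2026, 3, 13, 17, 51)   # countdown read: 5 days 2h  (122h)

elapsed_h = (t2 - t1).total_seconds() / 3600   # wall-clock hours elapsed
decrease_h = (6 * 24 + 21) - (5 * 24 + 2)      # countdown hours consumed

print(f"elapsed: {elapsed_h:.1f}h, countdown decrease: {decrease_h}h")
# elapsed: 42.8h, countdown decrease: 43h -- a real ticking timer
```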
This behavior is fully consistent with a billing tier mismatch where the client’s API session is periodically re-evaluated by a backend quota enforcement job and incorrectly classified as Free Tier.

