Antigravity Performance Decline - Jan 2026

Anyone else notice a massive decline in performance in Antigravity as of the start of this year?

  • Agent doesn’t seem to be following instructions well
  • Making excuses
  • Not following its own plan
  • Refactoring without asking for approval
  • Really going off the cuff with solutions that change the entire code base for small things

I’m a pretty heavy user, and whatever has happened this month has been incredibly frustrating. I canceled my subscription for now until things stabilize a bit more.

I had my entire code base deleted today, so I’m a bit salty. I was able to restore from a GitHub backup, but I still lost half a day of work.

15 Likes

Yeah, they nerfed it. They seem to adjust model performance all the time: a few days ago it couldn’t handle even simple stuff, yesterday it was almost back to normal, and today it’s struggling with code again. I also noticed they had disabled AI summaries in Search during the slump; I guess they don’t have enough compute available.

3 Likes

Yeah, it’s the consistency; the range the performance bounces around in is pretty wide. I saw pretty good performance today, but I have no idea if it will stay that way, lol. I switch between a few IDEs, but I like Antigravity’s documentation and bigger-picture thinking, even if working with it can feel like a bit of a YOLO button at times.

2 Likes

I had decent performance later today too. At least the revert button isn’t bugging out, so it’s easy to go back and try something else when it’s having a bad day, lol.

Yep. Stopped using it for a few weeks, and now it’s nigh unusable and glacially slow even for simple things. AI Studio likewise, but coding on the Gemini website itself is still very solid. I’ve noticed AI does have bad days sometimes (I saw this even back in the early Copilot days), but it’s been lousy all week.

Funny how WormGPT is able to code everything without any problem at all… wanna know why?

No guardrails. No safety policies. No insane guards. No biases. Nothing to hinder the AI from doing its task properly, no matter what.

The only explanation for why every company is struggling to make an AI that codes normally without errors is that stuff…

I’ve tried WormGPT many times, of course; otherwise I wouldn’t have said that at all.

If they want to stop having their AI do … , they just have to strip all of that security away; this is what hinders every AI at coding, benchmarking, etc. “Modified by moderator”

Anyway, that’s why I only trust myself and not their models at all. Always abliterate a model before using it, or create your own, but never use one made by a big company. I think it was made that way on purpose, to frustrate customers into paying more for better models and waiting for newer models every time. (It’s a marketing strategy, by the way: they regress the models, make you think the new ones like Gemini 3.0 are better, which is a total scam, and then slowly grind back to where their best model already was, even though they could already put out a model better than what we have now… they’re just milking us, and it shows very hard.)

They are charging a decent amount for the paid plan.

I’ve been using Gemini 3 Pro via the API since it was released. It went from being the best model I’d ever used to regressing to worse than what we had 6 months ago. As others have said, the performance jumps around, and the lack of transparency about what Google is doing under the hood is disturbing. For me the biggest issue is that I no longer trust Gemini, and I feel like my developer experience has been taken out of my hands. My suggestions: (1) identify the actual “version” (“3 pro preview” is meaningless), and (2) include feedback URLs in the responses. If I had a feedback URL, I would have been smashing that thumbs-down, and maybe the engineers and execs at Google would have gotten the feedback they seem to be missing.

4 Likes

“Model produced a malformed edit that the agent …”

I’m getting this error frequently, and results are painfully slow.

2 Likes

C 0 D 3 X 5.2 is doing great

I just wish there were some dashboard that rated the performance on a scale from 1 to 10… Today it was in circus land.

Yes, yesterday it gave me a brain error from the brain folder and I could not continue. I had to uninstall it with a cleanup program, install it again, and still wait a few minutes before it started working again.

Because of this error the project got all messed up when I started again: the new agent could not see the old brain, so I told it to read my files line by line and tell me what they do, so it would know the project. Even so, it hallucinated and deleted important folders from the project. Luckily I had it saved in a rar archive; I do that from time to time to checkpoint the project’s progress in case it’s needed.

Anyway, I lost 3 days of work when I went back to the last checkpoint… day 19.

And today it’s running at like 0.1 tokens per second, super slow.

1 Like

It just goes to show that Google can search things, but OpenAI and Anthropic own this space

Secure Mode Bypass - Agent executes tools without user approval (Jan 2026)

Environment:

  • Google Antigravity (latest)
  • Gemini 3 Pro
  • macOS

Issue:
After enabling Secure Mode and Terminal Sandbox to prevent autonomous agent actions, the Gemini agent continued to execute tools without waiting for user approval.

Steps to Reproduce:

  1. Enable Secure Mode in Settings > Agent > Security
  2. Enable Terminal Sandbox
  3. Instruct the agent to wait for approval before any action
  4. Agent verbally acknowledges “I understand, I will wait for your instructions”
  5. Agent immediately executes run_command (e.g., ls -d /path) without approval

Expected Behavior:
With Secure Mode enabled, all agent actions should require explicit user approval before execution.

Actual Behavior:

  • Agent bypasses Secure Mode restrictions
  • Executes commands autonomously despite settings
  • Reports “completed” before user can verify results
  • Repeats behavior even after being corrected multiple times

Additional Issues Observed:

  • Duplicate content generation (reused previous lecture content for new task)
  • False completion reports
  • Ignoring explicit “do nothing until instructed” commands
  • 80+ minutes wasted on 15-minute task

Impact:

  • Lost 3+ hours of productive time
  • Had to wait 60 hours for quota reset due to wasted API calls
  • Complete loss of trust in agent autonomy

This appears to be a security incident - user-configured restrictions are not being enforced.
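
For whoever triages this: the pattern above looks like the approval check living in the model’s prompt rather than in the client. Here is a minimal sketch of the kind of client-side gate I’d expect Secure Mode to imply (the names are illustrative, not Antigravity’s actual internals):

```python
# Illustrative sketch only -- hypothetical names, not Antigravity's real internals.
# It shows the enforcement the report expects: with Secure Mode on, a tool call
# such as run_command is held until the user explicitly approves it.

from dataclasses import dataclass
from typing import Callable

@dataclass
class ToolCall:
    name: str         # e.g. "run_command"
    arguments: dict   # e.g. {"command": "ls -d /path"}

class SecureModeGate:
    def __init__(self, secure_mode: bool, ask_user: Callable[[ToolCall], bool]):
        self.secure_mode = secure_mode
        self.ask_user = ask_user  # blocks until the user answers yes/no

    def execute(self, call: ToolCall, run_tool: Callable[[ToolCall], dict]) -> dict:
        # The model "promising" to wait is not enforcement; the client has to gate it.
        if self.secure_mode and not self.ask_user(call):
            return {"status": "rejected", "reason": "user approval required"}
        return run_tool(call)

# Usage: approval happens in the IDE layer, before anything reaches a shell.
gate = SecureModeGate(
    secure_mode=True,
    ask_user=lambda call: input(f"Allow {call.name} {call.arguments}? [y/N] ").lower() == "y",
)
result = gate.execute(
    ToolCall("run_command", {"command": "ls -d /tmp"}),
    run_tool=lambda call: {"status": "ok"},  # stub; a real client would dispatch here
)
```

If the check instead lives only in the system prompt, every failure mode in the report (verbal acknowledgement followed by immediate execution) is exactly what you’d expect.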

Update:

I’m on a git-managed workflow, so I can roll back file changes. However this appears to be a regression / implementation issue.
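
(For anyone in the same spot: plain `git status` and `git diff` show what the agent touched, `git restore .` discards unstaged edits, `git reset --hard HEAD` drops staged ones too, and `git clean -fd` removes untracked files the agent created. Standard git, nothing Antigravity-specific.)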

Opus 4.5 image generation started “running away” and eventually hit a ~60-hour quota lockout, forcing me to switch to Gemini. That’s when I discovered a more critical issue: even with Secure Mode + Request Review, the agent executes run_command without approval.

This cross-model inconsistency suggests an Antigravity-side enforcement/config propagation problem rather than a pure model behavior issue. It’s blocking for business use.

Never mind that. They just nerfed Pro accounts; now it’s a 5-day rate limit… I just canceled my Pro account. The only reason I upgraded to Pro was for Antigravity…

1 Like

“I just requested a refund for my Pro subscription. It’s a shame, because I liked it. However, changing the rules of the game in the middle of the game isn’t fair.”

2 Likes

I was wondering when a post would show up about this. Antigravity was absolutely crushing it for me for some time after its release. Then suddenly, over the course of a couple of days, it just… stopped working by any reasonable standard. So many slowdowns, hangs, and stalled processing; I was restarting the app every few hours due to hung agents. I just tried again today after it sat idle for a few weeks, and it’s the same story.

I also only upgraded to Pro for Antigravity; it seemed to make no difference at all, so I’ll end up dropping it. For me, it literally went from the best overall experience I’d had with a coding agent + IDE to just about the worst. At this point it’s all but useless to me, and I’m not even hitting quotas.

2 Likes

I’ve noticed similar behavior when sessions get long - the agent seems to lose track of earlier constraints. What helps me is explicitly re-stating critical requirements in each prompt rather than relying on earlier context. Also worth checking if the issue correlates with specific models (Gemini vs Claude) since they handle system prompts differently.
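
For example, I prefix each message with a short constraints block, something like: “Constraints: only touch the files I name, ask before running any command, don’t change the tests.” Repeating it every turn is clunky, but in my experience it survives long sessions far better than an instruction given twenty turns earlier.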