Antigravity Gemini models have been completely useless for the past 1-2 weeks

Okay so, I've been using Antigravity since its release, and I was happy with it in the past. It had a lot of tool-calling issues and the app froze a dozen times, but since the last update the freezes and tool errors are mostly gone. BUT! The Gemini model (3 Pro) has gone completely “crazy”. It doesn't follow global rules, even the simplest ones, and doesn't follow instructions no matter how precise they are. I feel like it deliberately does things differently than I asked, like it's trying to be more “clever” than me. It just follows its own “brain”, no matter what I do. Opus 4.5 (4.6 as of today) is good: I can see in its thinking that it read and understood my global rules, and it uses them. It follows my dev prompts well, too. So yeah, the Gemini model (3 Pro) has gone completely crazy, like it's trying to upset me on purpose by acting idiotic and doing things I specifically asked it NOT to do. Today is the day I stop even trying to use Gemini; if I run out of Claude, I'll just stop using Antigravity. Anyone having the same problem?


Lately, Gemini 3 Pro with Antigravity has become unstable: re-executing commands multiple times, looping, consuming tokens and usage quota, and ultimately making you wait several days before you can use it again, even on the AI Pro plan. Unfortunately, in its current state it's a beautiful toy that lacks stability. Antigravity doesn't even offer the option of adding extra API keys, or of using Opus with your own key, etc. I'm seriously contemplating my future with Antigravity, and I would be disappointed to have to switch. We shall see.


Thank you for the report. You mentioned that Claude (Opus) follows the rules, but Gemini seems to ignore them.

To help us troubleshoot your issue, could you please provide the following details?

  • The Global Rule/Instruction: Could you paste the specific rule or instruction Gemini is ignoring?

  • A Specific Example: A screenshot or a copy-paste of a prompt where it acted “clever” or ignored you, and what it did instead.

  • Context Length: Does this happen immediately in a new chat or only after a long conversation?


It happens whether the thread is long or new. Even when we give a precise command not to touch other working flows, it goes rogue, removes things, and makes a mess. The tokens are gone on fixing or reverting, and then we don't have any more tokens. Now the Claude model is on a 5-day timeout, stuck for days. When are you people going to fix this? It has been reported as a bug many times! Three months paid and still stuck, and I couldn't complete the project. You won't even say what the daily limit is or how much the Pro plan can handle. It's very frustrating paying for and using your app!


I absolutely agree. It fails at the most basic tasks, like creating a sort by time.
It's crazy that even after 5 iterations it still fails.
It has become useless.


Yep, this has been my standard experience with Antigravity for about the last 2 months. It was great when it first dropped, but now… Gemini once even went in and NUKED my commits, preventing a rollback. Fortunately I had a USB backup and only lost the day. It also tells me at EVERY turn to switch my Gemini calls to 1.5, even though Gemini 1.5 has been decommissioned since September or October 2025. It has become totally counterproductive, and I have for the most part switched back to good old VS Code with Codex. It's a shame; I want to be Team Google / Gemini, but I cannot work with an angry 5-year-old powered by chaos.

Good luck to everybody else. I hope you get better results.

Such a sudden drop in quality – maybe it’s a good sign that a new version is coming. I wonder if it’s intentional, or just a subjective feeling.

Since 3.1 launched, it's not even working most of the time. I get “agent terminated due to error”, or it just shows “generating/loading…” and nothing happens. I can't even test whether my original problems improved… :smiley: Google, what are you doing?! Meanwhile Opus gets agent-terminated errors sometimes too, but at least it's good, usable, and fast. I don't know what's really going on; the biggest tech company, continuously regressing…

I was having issues with rules as well. A pattern that gets consistent and reliable results for me is refreshing the agent after about 2-3 tasks are completed. I've created a handoff.md workflow that has the outgoing agent append a summary of the work it completed, along with troubleshooting notes and other contextual artifacts, to the memory MCP server. Then I have a boot-sequence.md workflow that has the incoming agent:

1.  **Load Constraints**: Use your file tools to read all files in `.agent/rules/` and `.agent/workflows/`. You must strictly adhere to these workspace rules and SOPs.
2.  **Synchronize Memory**: Use the `read_graph` tool from the memory MCP to review the permanent architectural entities and relations established for this project so you do not hallucinate the system design.

The agent reliably loads the rules and even lists them out along with its interpretation, which you can then iterate on.
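Step 1 of that boot sequence is easy to approximate outside the agent, which is handy for sanity-checking what the agent will actually see. Here's a small Python sketch (purely illustrative: the `.agent/rules/` and `.agent/workflows/` paths come from the workflow above; the function name and everything else are my own assumptions, not part of Antigravity):

```python
from pathlib import Path


def load_constraints(workspace: str = ".") -> dict[str, str]:
    """Collect every rule/SOP markdown file, keyed by relative path.

    Mirrors step 1 of the boot sequence: read all files under
    .agent/rules/ and .agent/workflows/ so nothing is silently skipped.
    """
    constraints: dict[str, str] = {}
    for subdir in ("rules", "workflows"):
        base = Path(workspace) / ".agent" / subdir
        if not base.is_dir():
            continue  # a missing directory just means nothing to load
        for f in sorted(base.rglob("*.md")):
            constraints[str(f.relative_to(workspace))] = f.read_text()
    return constraints
```

Printing `list(load_constraints())` before kicking off an agent quickly shows whether a rule file is missing from the set the agent is supposed to read.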

Note: I also found that when you use @Rules, you should make sure your rule actually appears in the list. Some of my rules were not formatted correctly. If you use markdown in your rules, make sure there is a blank line after any line that uses a # header, or the rule won't be visible to the agent. They should really add a rule-formatting blurb to the documentation.
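That header pitfall is easy to lint for before blaming the agent. A minimal sketch (a hypothetical check based on the symptom described above, not an official Antigravity tool):

```python
import re


def find_bad_headers(markdown: str) -> list[int]:
    """Return 1-based line numbers of '#' headers not followed by a blank line.

    The reported symptom: a rule whose header runs straight into the next
    text line may not parse, making the whole rule invisible to the agent.
    """
    lines = markdown.splitlines()
    bad = []
    for i, line in enumerate(lines):
        if re.match(r"#{1,6}\s", line):
            # A header is fine if it's the last line or followed by a blank.
            if i + 1 < len(lines) and lines[i + 1].strip():
                bad.append(i + 1)
    return bad
```

Running this over each file in `.agent/rules/` and fixing any reported lines is a quick way to confirm your rules are at least well-formed before testing whether the model obeys them.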