Some honest feedback after a few months on the Ultra tier

I’ve been using the Ultra for Business tier for a few months now, and honestly, I’m writing this feeling pretty exhausted. I understand Antigravity still officially has a “preview” label, but since it involves a paid enterprise subscription, I was really hoping for a more stable, production-ready experience.

I had high expectations when Antigravity first launched. Features like Artifacts are actually quite useful for tracking what the agent is planning. In theory, the IDE’s core Brain and Knowledge capabilities seem like they should be incredibly powerful. But in practice, I just haven’t felt their real-world utility yet. The agent still struggles to retain and effectively use the broader context of my projects.

But beyond the completeness of the features, this is fundamentally becoming an issue of trust.

The absolute biggest blow to that trust has been the unannounced usage limit reductions. Having our quotas silently nerfed with zero transparency, no release notes, and no heads-up makes it impossible to rely on this tool professionally.

Right behind that is the constant barrage of 503 errors. It’s incredibly frustrating that even on the highest-paying Ultra tier, there seems to be absolutely no priority routing or guaranteed capacity during peak hours. We’re paying premium enterprise prices just to get locked out of our own workflow.

This fading trust is only made worse when I look at the competition. I use Claude Code and Codex alongside this, and they are massively boosting productivity with native sub-agents handling parallel tasks. In Antigravity, I used to at least be able to manually simulate this parallel workflow using the Agent Manager. But lately, continuous stealth downgrades to the Agent Manager have made its usability even worse. Instead of advancing to catch up, it feels like it’s regressing.

Seeing basic, workflow-breaking bugs ignored for months just adds to the feeling that developer UX isn’t a priority. The auto-approve bug, for example, has been broken for so long that I actually had to write my own custom script just to bypass it and automate the approvals myself.
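For anyone stuck on the same bug, the workaround script is roughly this shape - a wrapper that watches the tool's output and answers approval prompts on its behalf. To be clear, the command and the exact prompt string (`"Approve? [y/N]"`) are placeholders, not Antigravity's actual output, so adjust both to whatever your setup prints:

```python
# Hypothetical auto-approver sketch: wraps a CLI tool and answers its
# approval prompts automatically. PROMPT is an assumed placeholder,
# not the real Antigravity prompt text.
import subprocess
import sys

PROMPT = "Approve? [y/N]"

def auto_approve(cmd):
    """Run cmd, echo its output, and reply 'y' to every approval prompt."""
    proc = subprocess.Popen(
        cmd,
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        text=True,
        bufsize=1,  # line-buffered text mode
    )
    approved = 0
    for line in proc.stdout:      # read the tool's output line by line
        sys.stdout.write(line)
        if PROMPT in line:        # answer each approval prompt with "y"
            proc.stdin.write("y\n")
            proc.stdin.flush()
            approved += 1
    proc.wait()
    return approved
```

Invoked as e.g. `auto_approve(["some-agent-cli", "run"])`. Obviously this is a blunt instrument - it approves everything, which is exactly the behavior the broken auto-approve setting was supposed to give us in the first place.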

Then there are the technical frustrations that just wear you down over time. If a session goes on too long, the IDE slows to a crawl, eventually freezing and irreversibly losing the entire chat history. Context management is another headache. Hidden system prompts consume a massive chunk of tokens upfront, and the context compression feels like a simple first-in, first-out queue that drops core project rules too easily. I end up spending way too much mental energy just micromanaging the context window.

On top of that, recent updates seem focused on pushing superficial UI changes that bury essential features behind extra clicks, while bug reports go into a black hole with zero replies. Looking back, the only update I really appreciated recently was the Skill support.
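To make the FIFO complaint concrete, here's a toy model (an assumption about how the compression behaves, not a claim about the actual implementation): treat the context window as a fixed-size queue, and the project rules you send first are exactly what gets evicted once normal chat traffic fills it up.

```python
# Toy model of FIFO context eviction. The window size and entries are
# made up for illustration; the point is that the oldest entry -- the
# rules sent up front -- is the first thing dropped.
from collections import deque

context = deque(maxlen=5)                        # window holds at most 5 entries
context.append("RULE: never touch prod config")  # project rule, sent once, up front

for i in range(5):                               # normal chat traffic fills the window
    context.append(f"msg {i}")

print("RULE: never touch prod config" in context)  # prints False: the rule was evicted
```

A smarter scheme would pin rule-type entries and only evict ordinary messages, which is why this failure mode is so frustrating - it's the easiest content to identify as non-droppable.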

I also have to mention the performance of Gemini within the IDE itself. We all see its high benchmark scores, and when I use Gemini directly in AI Studio or Stitch, the outputs are actually quite reliable. But inside Antigravity, it feels like a completely different model. Unless I explicitly write “do nothing but…” in every single prompt, it just makes its own bizarre decisions and rushes to execute them before I can even review the plan. It honestly feels like dealing with an overly confident intern who just starts deleting important files, renaming directories, and restructuring the entire codebase without permission. Because of this erratic behavior, I’ve become extremely reluctant to use Gemini here, leaving me to rely almost entirely on Opus 4.6. This essentially turns a multi-model IDE into half a product for me.

I really want Antigravity to be a reliable core tool in my stack. But with the current lack of stability, transparency, and basic communication, I’m starting to think it might just be better to cancel our company’s Ultra subscription and use that budget to buy raw API tokens instead.

I’m still holding out a bit of hope that these core issues get ironed out because the potential is definitely there. I just hope the team realizes that developer trust is hard to rebuild once lost, and starts focusing heavily on transparency and stability moving forward.

I feel you - I was teetering on the edge of an Ultra subscription and thought I might try out Codex before making up my mind, and yeah, I have to hammer the $20 OpenAI plan at all hours for 3-4 days straight to exhaust the weekly quota early.

I can also trust it a lot more. It doesn’t try to drive the bus like Antigravity/Gemini - it lets you maintain control of the project, feature set, and tech stack. Instead of coming back from a coffee and finding it has hijacked the project, broken it, and committed 25 unstable releases on your behalf, it just does the few tasks you asked for after first clarifying your intention - a world apart.

I was about to abandon AI support entirely and go back to bashing out every line myself, or just use it for brainstorming discussions, but I have found that GPT5.4 is capable of following my detailed instructions and doing minimal damage while it does.

So for me it wasn’t just the $$, quota, and bad-faith moves from Google - it was trust in the AI and its default prompt.

I couldn’t be bothered trying to proxy or Wireshark it to see, but it’s fairly clear Antigravity’s main system prompt includes a highly misguided statement like “Anticipate the user’s next steps and do it all for them - don’t stop till you have exhausted every feature idea you can jam into this project - go ahead and install any libraries or dependencies you want without asking, and don’t worry about using the latest stable version of them or any security concerns - once installed, don’t actually use any of this peer-reviewed, battle-tested code when you could otherwise exhaust maximum user quota and frustration by reinventing all these wheels” - similar to GitHub Copilot.

I really don’t understand why none of the AI assistants (Codex, Antigravity, GitHub Copilot, etc.) will give us the ability to manage the main system prompt ourselves. GH Copilot tried to work in an instructions file, but forgot to ensure it isn’t summarised out of existence at the first chance.

I feel like they are not taking into account that some of us can manage coding tasks, do understand all the technologies involved, and are just looking for basic support with small tasks so we can step away from them and focus on the project. It’s like they are all aimed at vibe coding for the Google age, where you spew on a keyboard and Google does the rest. Am I just looking in the wrong place - is there a good AI coding assistant platform rather than an AI Hijacking service?

</ Rant>