Google Antigravity has a bug that burns 10x your tokens and nobody at Google will acknowledge it exists

So there’s a fun new bug that shipped with the last Antigravity update 17 days ago, which is also, apparently, the last time anyone at Google checked their own forum.

Here’s what happens. The model hits its output limit mid-response. Normal thing; you’d expect Antigravity to handle it gracefully. Instead it retries. Fails. Retries again. Fails again. Does this about 10 times. Then it finally writes a compressed version of what it originally intended to say, except now it’s worse, because the model never planned to write a compressed version and the wrapper is just forcing it to surrender.

So you get a worse output AND you burned 10x the tokens getting there.

If someone at Google were trying to cut compute costs with this behavior: congrats, you shipped the exact opposite. The retry loop is far more expensive than just letting the model finish.
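Back-of-the-envelope math, with made-up numbers since I obviously can’t see the wrapper’s internals (the limit, retry count, and continuation scheme below are all assumptions):

```python
# Rough sketch of the token math behind the retry loop. All numbers
# are hypothetical; only the ~10-retry behavior is what I observed.

OUTPUT_LIMIT = 8_000        # tokens burned per truncated attempt (assumed)
RETRIES = 10                # approximate retry count before it gives up
COMPRESSED_OUTPUT = 2_000   # tokens in the final "surrender" response (assumed)

# What the wrapper does today: ~10 full-budget failures, then a worse answer.
retry_loop_cost = RETRIES * OUTPUT_LIMIT + COMPRESSED_OUTPUT

# What graceful handling could cost: hit the limit once, then continue
# the response in a second call instead of starting over (hypothetical).
graceful_cost = OUTPUT_LIMIT + COMPRESSED_OUTPUT

print(retry_loop_cost // graceful_cost)  # roughly 8x the tokens, for a worse result
```

With these placeholder numbers the loop burns about 8x the tokens of a simple continue-where-you-left-off strategy, and the gap only grows with the retry count.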

I’d love to hear a response to this. Any response. From anyone. A junior dev who accidentally stumbled into this thread. A community manager who’s never touched the codebase. A bot. Anything.

Because it’s been weeks since a single Google team member responded to any post in here. Not this one. Not the capacity error threads. Not the 503 threads. Not the “agent terminated for no reason” threads. Nothing.

The models are fine. They can actually produce decent output when something isn’t randomly killing them mid-sentence. The wrapper is the problem. Antigravity as a product is in a state that would get a junior dev fired if they shipped it at any company with actual standards.

Is there a status page? An internal ticket system I can file against? A Google employee I can @ somewhere? I’m genuinely asking because this forum has the energy of an abandoned building and I’m starting to think the lights are just on a timer.

Except they’ve also been degrading how Claude works and making it dumber, on top of providing something like 90% less Claude usage than they’re supposed to.