Last week it was a forced but mangled upgrade. Earlier this week it was magically disappearing/reappearing quotas. Today: the servers are too busy.
I was sold Antigravity as part of my Google Pro package, with a 4-hour refresh window. I ain’t getting that. I don’t really care what the excuse is - if you sell a system that is entirely dependent on your backend services, those services need to be available, otherwise it’s just a pretty toy.
Yesterday, I was using antigrav and it was quite reliable. However, it’s also really quite awful. My little project has been entirely written by antigrav. I’ve not written a line of code because… well, that’s the point of the exercise, isn’t it? My project is a small photo library app, which has a backend and a frontend. It’s been having a problem starting because sometimes the old backend processes don’t shut down and keep hogging the port used to talk to it.
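For what it’s worth, the fix it kept failing to land is only a few lines. A minimal sketch of a pre-start guard (assuming a Unix-like system with `lsof` available; the port number is illustrative, not from my actual project):

```typescript
// Pre-start guard: kill any leftover process still holding the backend port,
// so a stale backend can't block the new one from binding.
import { execSync } from "node:child_process";

const PORT = 3001; // illustrative backend port - substitute your own

function freePort(port: number): void {
  try {
    // `lsof -t` prints bare PIDs; `-i tcp:<port>` filters by the TCP port.
    const pids = execSync(`lsof -ti tcp:${port}`).toString().trim();
    for (const pid of pids ? pids.split("\n") : []) {
      // Ask the stale process to exit cleanly before we start the new one.
      process.kill(Number(pid), "SIGTERM");
      console.log(`Freed port ${port} (sent SIGTERM to PID ${pid})`);
    }
  } catch {
    // lsof exits non-zero when nothing holds the port - nothing to do.
  }
}

freePort(PORT);
```

Run it before (or at the top of) the backend’s startup script and the “port already in use” dance goes away, without ever touching the freshly started process.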
Getting antigrav to sort the issue is fascinating, bordering on farcical. The number of times it spends ages making a fix, declares it’s all sorted, and splat, the thing falls on its face when I press run. Round and round in circles. Even my normal trick of telling it to step back and rethink… splat.
On every request (in the same chat), it spends many seconds analysing multiple code files as if it’s coming at the problem fresh.
Yesterday lunchtime, having spent the morning getting nowhere, I thought, ‘I know, I’ll try getting it to review, refactor and modularise the project, so that each request has less to confuse it’. So I got antigrav to review the project. It turns out that despite clear global rules instructing antigrav to keep code highly modular, maintain separation of concerns, follow coding best practice etc., the codebase - written by antigrav - was awful. Massive code files, little modularisation, significant deviation from the agreed architecture, db code mushed in with business logic. Worse than the most junior code I’ve worked with. Basically, as if you took the marketing manager and sat them down to write an app.
So we refactored. I say we, I mean antigrav. With a whole load of coaxing. I’m basically a hairy, fat, old cheerleader for AI now. Why is it sooooo sloooow? Well, I did other stuff while it chugged along. Finally, ping, bright shiny new refactored codebase.
Run. Well, first of all, when it started up it threw a bazillion compiler errors. Although we use TypeScript so everything can be type-checked at build, we don’t actually bother to check that the code we emit is going to compile; no, let’s leave that for the runtime compiler, they love a challenge.
So, another jolly hour getting it to sort out the linting to pick up issues before we run code. Then persuading it that, yes, shock, I would actually like all the ‘legacy’ code (because code written two days ago is legacy, kids) to be checked and fixed.
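The gate itself is trivial, which is what makes the omission so galling. A sketch, assuming an npm-style project (script names and the `dist/main.js` entry point are illustrative): npm runs a `prestart` script automatically before `start`, so the app simply refuses to launch until the compiler signs off.

```json
{
  "scripts": {
    "typecheck": "tsc --noEmit",
    "prestart": "npm run typecheck",
    "start": "node dist/main.js"
  }
}
```

`tsc --noEmit` type-checks the whole project without writing output files, so a single type error fails `prestart` and you never reach the runtime splat.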
So all that done, run the new shiny codebase and… splat.
Ok, so we’re back where we started this morning. That’s fine, because the issue will be easy for it to fix now the codebase is much cleaner. Er, well, the main.js script was still 1000 lines long and completely not refactored. Because, obviously, ‘refactor the whole project’ means ignore the script you wrote to run it.
So this morning, with a fresh new day, antigrav may have finally fixed the issue. Basically, in ‘fixing’ the original issue of leftover processes hogging ports, it was shutting down the new processes it had just started, and when it tried to fix the fix it just got worse and worse.
I’m happy. Do I want to move on to the next item on my todo list? Yes sir, yes I do. Type prompt, press send… “Our servers are experiencing high traffic right now”.
Which gives me plenty of time to give feedback here.
So my overall feeling on antigrav, on any of these tools, is that for all the promise, it’s currently pretty awful. It’s churning out code which might function but becomes increasingly impossible for it to maintain. The AI doesn’t seem to have any semantic understanding of projects, which is bizarre, because of any field of study, a codebase is inherently structured. But it chooses to see code as flat files of text. It must be re-analysing so much, over and over, on every request. Then the AI chucks out any old code as a response: doesn’t refactor, doesn’t type-check, often removes chunks of code that are needed for the thing to run. So there’s a feedback loop of code files getting bigger and bigger. It’s like the worst technical-debt builder ever.
It’s bonkers, the amount of compute that must be getting sunk into figuring out the simplest stuff; no surprise Google is seeing high traffic. All the projects people started when Pro 3 was released are just getting to the size where they implode under technical debt. My rule files are vocal about code quality; I wonder what happens when there are no rules.
I know Google are pushing the agentic front, but there’s a fundamental that isn’t being addressed. It’s one of the issues I raised with Windsurf 12 months ago. If you treat codebases like a billion lines of text, it’s never going to work. It’s the same trick senior devs have been using for years: we never knew every line of code, we knew the project. We forced devs to write code that stuck to the architecture, coding standards, even the odd comment. It wasn’t because we were jobsworth control freaks; it was because we needed maintainable code.
The upside of all this is that, all those corps laying off thousands of coders? Ha, ha, ha, ha. If you’ve been laid off due to AI, don’t sweat it. A bunch of companies are going to be getting the guys in the marketing department to AI-write their apps from now on. And in about six months, they’ll find that their projects have all imploded. Make sure you get rehired for triple your old salary.
Have fun.
