Horrible experience with the entire Google AI Platform

Using any model below 3.1 Pro for even a simple coding task can yield disastrous outcomes, while using 3.1 Pro hits rate limits and user quotas. The Antigravity IDE has major flaws. Gemini API calls made by my agent under development time out frequently. Setting up Google AI API keys vs. Vertex AI API keys for different purposes is a nightmare, and a small mistake in the process can blow up your cloud bill.
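For what it's worth, until the timeouts are fixed on Google's side, wrapping API calls in retry-with-backoff at least keeps an agent from falling over. A minimal stdlib-only sketch (`call_gemini` is a hypothetical stand-in for whatever client call you actually make, and the delay numbers are illustrative, not official guidance):

```python
import random
import time

def with_backoff(fn, max_attempts=5, base_delay=1.0, retryable=(TimeoutError,)):
    """Call fn(); on a retryable error, wait and retry with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except retryable:
            if attempt == max_attempts - 1:
                raise  # Exhausted all attempts; surface the last error.
            # Exponential backoff with jitter: base, 2x base, 4x base, ...
            time.sleep(base_delay * (2 ** attempt + random.random()))

# Hypothetical usage -- call_gemini would wrap your actual API request:
# result = with_backoff(lambda: call_gemini(prompt), retryable=(TimeoutError,))
```

This doesn't make the service reliable, but it turns intermittent "server busy" failures into slowdowns instead of crashes.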

Google Nanobanana and Veo were advertised with cherry-picked examples; in real life it takes far too many iterations to get anything desirable.

This is an extremely poor set of services from a company aspiring to be an AI leader. I have an Ultra subscription because I used to believe that everyone starts out badly but improves. Somehow Google is not focused on improving the Google AI experience for existing users, only on capturing a larger audience to onboard to the platform. There is no point in capturing a bigger audience if you cannot keep your heaviest payers from churning. Everyone knows it is easier to keep a paying customer than to get a new customer to pay. Google should really understand this.

Follow-up example:

Even Gemini 3.1 Pro successfully managed to mess up my entire repo. It claimed (within the Antigravity IDE) to have run my entire test suite successfully (500+ tests). In reality, when I asked Claude to audit, it turned out that over 200 tests were never run, and when Claude ran those, ~120 of them were failing.

Upon asking Claude to debug (within Antigravity), I got the “Our servers are busy. Please retry” error. So I had to switch back to Gemini 3.1 Pro or wait 4 hours to be able to use Claude again. I was on a tight timeline, so I used Gemini 3.1 Pro, only for it to f**k up the entire codebase again. Ultimately I had to wait for Claude Opus to become available within Antigravity to finally clean up the mess.

Hi @sparshgupta8130 ,

Thank you for the feedback. We apologize for the bad experience and acknowledge that we are facing some issues. We are actively working to resolve the challenges you mentioned with rate limits, API timeouts, and setup complexity.

To help us investigate the hallucinations and bad coding outcomes you experienced with Gemini 3.1 Pro, could you please share a few prompt examples? This will help our team improve the model.

I’ve gotten notifications to complete certain tasks to increase quota limits, followed by constant reminders to create usage caps or limits. Each time I review these notifications, it’s because I haven’t even been able to use my subscription before being immediately rate limited. I understand the demand some of the harnesses are putting on the system, but at a base level, please improve the consistency and clarity of rollouts to end users.

You’re facing some issues? Well, why not take down your whole service and actually fix it? These problems have been persistent for the past 3 months now.

EXACTLY!!! SOME ISSUES?!?! The whole Google AI Studio, the API, and related services are a JOKE! Not working 99% of the time, day or night.

You know, I had to build my own front end just to generate images, because the Google AI Studio one fails every other try. My front end works 95% of the time or more. Why should a user of this service have to create their own tool to fix what should just work in GAIS?!

I was very excited about getting into Vertex after chatting with Gemini about using it in my business. The initial brainstorming conversation blew me away: from leveraging AlloyDB and BigQuery to the agent registry, persistent agent memory, Datastream, Pub/Sub, Cloud Run, etc., I thought, wow, Google built an amazing platform and literally thought of everything.

Then I started getting into specific planning, and I felt it fall apart. Gemini would quickly lose context; I had to keep it on track and continually redirect it. I decided to start with Agent Studio, creating an agent and a sub-agent to track my data migration from our existing CRM to AlloyDB, so the two agents could monitor our schema transition. This resulted in roughly two hours of going in circles with Gemini over an issue where Agent Studio was loading in the global region, I was trying to publish in the us-west region, and Claude needed to run in us-east5. I ended up ditching Gemini after it rate-limited my conversation and started repeating the same instructions we had already run through multiple times.

I could not successfully deploy an agent using Agent Studio, but I was able to get one deployed using the ADK with Claude's help. I invested a lot of money in a setup to run local models, and I was ready to ditch $100k of hardware in favor of running our agent stack on Vertex, but if what I saw tonight is indicative, the platform is essentially unusable. I sincerely hope Google gets this figured out.