Ultra tier user here. I ran some tests today and got some confusing results — hoping someone can help explain.
The cutoff date test
I asked each model: “What’s the most recent date in your training data?”
Every “Claude” option (Opus 4.6, Sonnet 4.5, Sonnet 4.5 Thinking) answered April 2024. I then asked the same question to the real Claude Sonnet 4.5 through Anthropic’s CLI — it said January 2025.
Here’s where it gets weird: the Antigravity “Claude” told me it doesn’t know the 2024 US election results because they’re past its cutoff, but then correctly stated the exact electoral vote count (Trump 312, Harris 226) in the same response. So it has the data but says it doesn’t?
Gemini’s explanation
When I switched to Gemini 3 Flash and asked about this, it said:
“In the Antigravity IDE, there is a ‘Broker’ layer between you and the actual AI. The UI Label: You selected CLAUDE_4_5_SONNET_THINKING. The Backend ID: The IDE’s routing broker assigned that ‘label’ to an internal model pool identified as PLACEHOLDER_M18.”
“The previous model you were using was likely routed to an older ‘persona’ that was hard-coded with a 2024 cutoff.”
Is that accurate? Is there a routing layer that maps UI selections to different backend models?
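To make Gemini's claim concrete, here's my rough mental model of what such a broker layer would look like. This is purely a sketch: every identifier except PLACEHOLDER_M18 (which Gemini itself quoted) is my own invention, not anything I found in Antigravity.

```typescript
// Hypothetical sketch of a UI-label → backend-pool routing broker.
// All names here are assumptions for illustration, NOT Antigravity's code;
// PLACEHOLDER_M18 is the internal ID Gemini quoted above.
const modelPools: Record<string, string> = {
  CLAUDE_4_5_SONNET_THINKING: "PLACEHOLDER_M18",
  GEMINI_3_FLASH: "POOL_G3F", // invented for illustration
};

function resolveModel(uiSelection: string, fallback = "DEFAULT_POOL"): string {
  // The user only ever sees `uiSelection`; the broker silently substitutes
  // whatever pool ID it has mapped, falling back if the label is unmapped.
  return modelPools[uiSelection] ?? fallback;
}

console.log(resolveModel("CLAUDE_4_5_SONNET_THINKING")); // "PLACEHOLDER_M18"
console.log(resolveModel("SOME_UNMAPPED_LABEL"));        // "DEFAULT_POOL"
```

If something like this exists, it would explain the mismatch: the label in the dropdown and the model that actually answers are two different things, joined only by a mapping the user can't see.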
Feature flag question
I also noticed in DevTools that my account context includes hasAnthropicModelAccess: "false" despite being on Ultra tier. Does this mean Claude models aren’t actually available for my account? If so, what happens when I select them in the dropdown?
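For what it's worth, here's how I imagine a flag like that could gate the dropdown. Again, this is a speculative sketch: the only detail taken from my DevTools session is that hasAnthropicModelAccess was the string "false"; everything else (the function, the fallback pool name) is hypothetical.

```typescript
// Speculative illustration of feature-flag gating with a silent fallback.
// Only the flag name/value come from my DevTools observation; the rest is invented.
interface AccountContext {
  tier: string;
  hasAnthropicModelAccess: string; // observed as the *string* "false", not a boolean
}

function effectiveModel(ctx: AccountContext, selected: string): string {
  const wantsClaude = selected.startsWith("CLAUDE_");
  // Note: the string "false" is truthy in JS, so a naive `if (flag)` check
  // would pass; a broker comparing against "true" would quietly reroute.
  if (wantsClaude && ctx.hasAnthropicModelAccess !== "true") {
    return "FALLBACK_POOL"; // invented name for some non-Claude backend
  }
  return selected;
}

const ctx: AccountContext = { tier: "Ultra", hasAnthropicModelAccess: "false" };
console.log(effectiveModel(ctx, "CLAUDE_4_5_SONNET_THINKING")); // "FALLBACK_POOL"
```

If the backend does anything like this, selecting a Claude model with the flag set to "false" would silently hand the query to a different model, which would line up with the cutoff-date answers I got.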
Just trying to understand what’s going on. Has anyone else looked into this?
My takeaway so far is that Antigravity treats the UI model selection as little more than a suggestion, then routes the query to whichever backend model it chooses, with no way for the user to verify which model actually handled it.
Honestly, this feels like material misrepresentation and false advertising, especially when you're paying for the most expensive plan. I'd really appreciate it if a Google employee could shed some light on this.
Thanks.