Hi everyone, quick Antigravity trust/transparency question.
I’m loving Antigravity so far (the agent-first workflow and artifacts are awesome), and one of the big reasons I’m using it is the ability to choose between models via the Model Selection dropdown while working (Gemini plus third-party models).
The issue: occasionally, in a longer thread where I’m intentionally staying on one chosen model, the responses suddenly feel qualitatively different (tone, precision, “depth”) in a way that makes me wonder whether the request was routed to a different model (or a lower tier) behind the scenes, perhaps due to quota/rate limits, load, safety policies, or some internal routing logic. I fully understand that quotas exist in preview and that limits can come into play; the limits themselves don’t bother me. I’m mainly trying to understand what’s happening so I can trust my workflows and comparisons, especially if/when a model underperforms expectations and causes re-work or rolled-back code.
Questions for the community
- Does Antigravity ever fall back to / switch models even if I selected a specific one (e.g., quota exhausted, model overload, latency optimization)?
- If yes, where is that shown in the UI? Is there a per-message “model actually used” indicator anywhere? (For example, I had Claude Opus 4.5 selected, but when I asked the model which version I was working with, it said it was unsure and thought it was Sonnet.)
- Is there any request log / metadata / trace (even developer-facing) that shows:
  - selected model vs. executed model
  - reason for fallback (quota, overload, etc.)
  - a response/request ID for debugging
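To illustrate what I mean, a per-request trace entry covering those fields might look something like this (the field names are my own invention for discussion, not an actual Antigravity format):

```json
{
  "request_id": "req_abc123",
  "selected_model": "claude-opus-4.5",
  "executed_model": "claude-sonnet-4.5",
  "fallback": true,
  "fallback_reason": "quota_exhausted",
  "timestamp": "2025-01-15T10:42:00Z"
}
```

Even just `selected_model` vs. `executed_model` plus a request ID would be enough to debug most of the cases I’ve described.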
- Best practices: if I want strict experiments (A/B testing models), how do folks ensure the chosen model is the one that actually answered?
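For strict A/B runs, the workaround I’ve been considering (outside Antigravity, assuming you have direct API access to the same models) is to log the model you requested alongside whatever model identifier the response metadata reports, and flag mismatches. This is only a hypothetical sketch: the response dict stands in for whatever your actual client returns, and the field names assume the API echoes back the served model:

```python
# Hypothetical sketch: check that each response came from the model you requested.
# Assumes the API's response metadata reports the model that actually served the
# request (many hosted LLM APIs echo this back).

def check_model_match(requested: str, response: dict) -> dict:
    """Compare the requested model against the model reported in response metadata."""
    served = response.get("model", "unknown")
    return {
        "requested": requested,
        "served": served,
        "match": served == requested,
        # Keep a request ID around so a mismatch can be reported/debugged later.
        "request_id": response.get("id"),
    }

# Example with a fake response payload, showing what a mismatch report looks like:
fake_response = {"id": "req_123", "model": "claude-sonnet-4", "text": "..."}
report = check_model_match("claude-opus-4", fake_response)
print(report["match"])  # False: the served model differs from the requested one
```

Inside Antigravity itself I don’t see a way to do this today, which is exactly why a per-response “answered by” indicator would help.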
Feature request (if this isn’t already supported)
If Antigravity’s goal is trust, it would be hugely helpful to have explicit transparency per response:
- “Answered by: gemini-… / claude-… / etc.” on each response
- If there’s a fallback: a banner plus reason (“quota hit → fallback to X”)
- An optional “lock model” toggle for people doing benchmarking/comparisons
Would love to hear if others have seen this, or if I’m missing an existing setting/indicator. Thanks!