Question: How can we verify which model Antigravity actually used for a given response?

Hi everyone, quick Antigravity trust/transparency question.

I’m loving Antigravity so far (agent-first workflow + artifacts are awesome), and one of the big reasons I’m using it is the ability to choose between different models via the Model Selection dropdown while working (Gemini + third-party models).

The issue: occasionally, in a longer thread where I’m intentionally staying on one chosen model, the responses suddenly feel qualitatively different (tone/precision/“depth”) in a way that makes me wonder whether the request was routed to a different model (or a lower tier) behind the scenes, possibly due to quota/rate limits, load, safety policies, or some internal routing logic. I fully understand that quotas exist in preview and that limits can come into play; I’m not troubled by the limitations themselves. I mainly want to understand what’s happening so I can trust my workflows and comparisons, especially if a model underperforms expectations and causes re-work or rolled-back code.

Questions for the community

  1. Does Antigravity ever fall back or switch models even when I’ve selected a specific one (e.g., quota exhausted, model overload, latency optimization, etc.)?

  2. If yes, where is that shown in the UI? Is there a per-message “model actually used” indicator anywhere? (e.g., I had Claude Opus 4.5 selected, but when I asked the model which version it was, it said it was unsure and thought it was Sonnet.)

  3. Is there any request log / metadata / trace (even developer-facing) that shows:

    • selected model vs executed model

    • reason for fallback (quota, overload, etc.)

    • a response/request ID for debugging

  4. Best practices: if I want strict experiments (A/B testing models), how do folks ensure the chosen model is the one that actually answered?
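For what it’s worth, outside of Antigravity the usual trick is to call the provider directly and check the model id echoed back in the response body (OpenAI- and Anthropic-style APIs include a `model` field in each response). This is a minimal, hedged sketch of that check; the payload shapes and model names are illustrative assumptions, not anything Antigravity exposes:

```python
def verify_model(requested: str, response: dict) -> bool:
    """Return True if the response reports the model that was requested.

    Many chat-completion-style APIs echo the serving model in the
    response body; comparing it to the requested id catches silent
    fallbacks. Providers sometimes append a date/version suffix to the
    id, so a prefix match is also accepted.
    """
    executed = response.get("model", "")
    return executed == requested or executed.startswith(requested + "-")


# Illustrative response payloads (shapes assumed for this sketch):
exact    = {"model": "claude-opus-4.5"}
suffixed = {"model": "claude-opus-4.5-20251101"}
fallback = {"model": "claude-sonnet-4.5"}

print(verify_model("claude-opus-4.5", exact))     # True
print(verify_model("claude-opus-4.5", suffixed))  # True
print(verify_model("claude-opus-4.5", fallback))  # False
```

Asking the model itself which version it is (as in point 2) is unreliable, since models often don’t know their own deployment name; a server-side echo like this is the only trustworthy signal, and it only works when you control the API call.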

Feature request (if this isn’t already supported)

If Antigravity’s goal is trust, it would be hugely helpful to have explicit transparency per response:

  • “Answered by: gemini-… / claude-… / etc.”

  • If there’s a fallback: banner + reason (“quota hit → fallback to X”)

  • Optional “lock model” toggle for people doing benchmarking/comparisons

Would love to hear if others have seen this, or if I’m missing an existing setting/indicator. Thanks!

Hello @Greg_Dogum, welcome to AI Forum!

Currently, Antigravity does not display a per-message metadata badge (e.g., ‘Generated by Claude Sonnet 4.5’) on individual responses. Your selection in the dropdown is sticky for the duration of that conversation thread. If you selected Claude 3.5 Sonnet at the start, the ‘Mission Control’ orchestrator is hard-coded to route your main reasoning prompts to that endpoint until you manually switch it.
It is important to note that sub-agents often use specialized, fixed models regardless of your dropdown choice. The browser agent uses a specialized Gemini 2.5 Pro UI checkpoint, and Gemini 2.5 Flash is used in the background for checkpointing and context summarization.


Thanks so much for that reply! I remember seeing the details in the documentation about the stickiness of the model within a thread, but now that you mention sub-agents, that must explain why a response or action sometimes seems out of place relative to the main agent. Much appreciated!


Hi @Abhijit_Pramanik

What about the fix for the Claude model quota issue? What’s the status?


Hello @ihssmaheel,
Thank you for sharing your concern. Many other users have reported the issue with the Claude model quota. Here is one such thread. You could post your comment there for better visibility.