Issue Summary
When using Claude Opus 4.5 in Antigravity IDE and asking the agent “What model are you?”, it consistently responds with “Claude Sonnet 4 (claude-sonnet-4-20250514)” instead of identifying as Opus.
Steps to Reproduce
- Open Antigravity IDE
- Select Claude Opus 4.5 (Thinking) from the model dropdown
- Ask the agent: “what claude model are you?”
- Agent responds: “Claude Sonnet 4 (claude-sonnet-4-20250514)”
Expected Behavior
The agent should correctly identify itself as Claude Opus 4.5 when that model is selected.
Actual Behavior
The agent consistently reports itself as Claude Sonnet 4, regardless of which Claude model is selected in the IDE.
Additional Context
Why I noticed this: the responses didn't seem to show Opus-level reasoning, which confused me. That's when I asked "what model are you?" to verify, and it claimed to be Sonnet 4.
- Screenshot attached: Shows model dropdown set to “Claude Opus 4.5 (Thinking)” but agent response claims “Claude Sonnet 4”
- This appears to be a model identity mismatch at the proxy/API layer, not necessarily that the wrong model is being used
- Makes it impossible to verify which Claude model is actually processing requests
- Similar issues reported with Claude model routing in Antigravity (e.g., MCP-related failures, agent termination errors)
Questions
- Is Antigravity automatically switching from Opus to Sonnet based on usage limits (similar to Claude Code’s 20% Opus limit)?
- Is this a display bug where Opus is running but identifying as Sonnet due to proxy configuration?
- How can users verify which Claude model is actually processing their requests?
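On the verification question: outside the IDE, the raw Anthropic Messages API reports the serving model in the response body's `model` field, which is set server-side and does not depend on the model's self-report in the text. A minimal sketch using a fabricated sample response (the message id and text content below are made up for illustration):

```python
import json

# Fabricated example of a Messages API response body. The top-level
# "model" field is authoritative server metadata; the text inside
# "content" is just what the model says about itself.
sample_response = json.loads("""
{
  "id": "msg_example",
  "type": "message",
  "role": "assistant",
  "model": "claude-opus-4-5-20251101",
  "content": [{"type": "text", "text": "I'm Claude Sonnet 4."}],
  "stop_reason": "end_turn"
}
""")

served_model = sample_response["model"]                 # what actually served the request
claimed_model = sample_response["content"][0]["text"]   # what the model claims

print(f"Served by: {served_model}")
print(f"Claims:    {claimed_model}")
```

If Antigravity exposed this response-level `model` field (e.g. in a debug log), the mismatch question would be answerable directly.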
Environment
- Platform: Antigravity IDE
- Model Selected: Claude Opus 4.5 (Thinking)
- Model Reported: Claude Sonnet 4 (claude-sonnet-4-20250514)
- Date: January 30, 2026
Any clarification on whether this is a known issue or intended behavior would be appreciated. Thanks!
Update with Proof
Here's the evidence: I went to claude.ai to ask Claude directly and confirm what's happening in Antigravity IDE.
What I asked: “what claude model are you?”
First response: Claims to be Sonnet 4.5 (claude-sonnet-4-5-20250929)
When I asked about Opus specifically: it correctly stated that the Opus model string is 'claude-opus-4-5-20251101'
So in Antigravity, one of two things must be true:
- I'm not actually on Opus like I thought I was in AG, or
- The interface is showing the wrong model
Since I verified the correct Opus identifier on claude.ai itself, this points to a model identity/selection issue in how Antigravity IDE is handling Claude models. Either way, there's a serious problem here.
Can someone from Google please clarify what’s actually going on?
Anthropic would have that data available in the system prompt for the model; Antigravity does not. Opus only knows it's an Anthropic variant because of its training data. This is really common: if you ask an open-source model directly what model it is, it will often claim to be a variant of OpenAI/Anthropic/Google, whereas the same model behind the company's deployed chat interface knows its identity because the system prompt tells it.
That explanation would make sense if the model gave a generic answer like “I’m a Claude model.”
But in this case it reports a specific internal identifier (claude-sonnet-4-20250514). That suggests runtime metadata or system-level labeling, not just training-data self-identification.
So either Antigravity is injecting incorrect model metadata, or the backend is routing to Sonnet while the UI claims Opus. In either case, the mismatch is still real and worth clarifying.
I tested this further for comparison.
When Claude Sonnet 4.5 is selected in Antigravity and asked about its model identity, it correctly identifies itself as Claude Sonnet 4.5.
However, when Claude Opus 4.5 is selected and asked the same question, the responses are inconsistent. In multiple runs, it identifies itself as Claude Sonnet 4 and in some cases as Claude Sonnet 3.5, rather than Opus.
This suggests the issue is not a general limitation of models self-identifying without system prompts, since Sonnet 4.5 reports correctly in the same environment. Instead, the problem appears to be specific to how Opus 4.5 is being routed or labeled within Antigravity.
Hello @ihssmaheel,
Thanks for the report. This is expected behavior with LLMs.
Unless the specific version string is hard-coded into the System Instruction, models often guess their identity based on their training data. Since ‘Sonnet 4’ and ‘Opus’ likely share similar underlying training datasets or alignment fine-tuning, the model may confuse its own persona.
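The "hard-coded into the System Instruction" case described above can be sketched as a Messages API request payload. This is an assumption about what a client like Antigravity could send, not what it actually sends; the system-prompt wording is hypothetical:

```python
import json

# Hypothetical sketch: the client pins the selected model's identity in
# the system prompt so that self-reports match the dropdown selection.
selected_model = "claude-opus-4-5-20251101"

request_payload = {
    "model": selected_model,
    "max_tokens": 256,
    # Without a line like this, the model can only guess its identity
    # from training data, which is the behavior described above.
    "system": (
        f"You are running as model {selected_model}. "
        "When asked about your identity, report this exact string."
    ),
    "messages": [
        {"role": "user", "content": "what claude model are you?"}
    ],
}

print(json.dumps(request_payload, indent=2))
```
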
I just asked the same question in my Antigravity setup and got the answer "I'm Antigravity."
As long as the Project Settings or Chat Dropdown shows ‘Claude Opus,’ the Antigravity backend is strictly enforcing that selection for your API calls.
Hello,
Thanks for the clarification. I understand that LLMs may sometimes guess their identity when the version string is not explicitly exposed via system instructions.
However, I wanted to follow up with an updated observation.
As of now, when I directly ask the model what it is, it consistently identifies itself as Claude 3.5 Sonnet, explicitly stating:
- Provider: Anthropic
- Model family: Claude 3.5
- Variant: Sonnet
I’ve attached a screenshot of the response for reference.
This seems to conflict with the earlier explanation that the backend is strictly enforcing the “Claude Opus” selection as shown in the Project Settings or chat dropdown. Given that the model is now confidently and consistently self-reporting as Sonnet (rather than a generic “Antigravity” identity), this no longer appears to be simple persona confusion.
Could you please clarify:
- Whether any routing, fallback, or cost-optimization logic can substitute Sonnet when Opus is selected
- Or whether the model exposed in the UI may differ from the model actually serving responses in some cases
I just want to understand the actual model behavior for evaluation and trust reasons.
Thanks for your time and clarification.