Model Mismatch: UI Shows Claude Sonnet 4.5 (Thinking) but Response Quality Suggests Lower-Tier Model

Issue Summary

I’m experiencing a critical discrepancy between the model displayed in the Antigravity UI and the actual model executing my requests. This appears to be a case of service misrepresentation.

What I’m Paying For

  • Subscription: Google AI Ultra ($200)

  • Expected Model: Claude Sonnet 4.5 (Thinking)

  • UI Display: Clearly shows “Claude Sonnet 4.5 (Thinking)” at the bottom of the chat interface

What I’m Actually Getting

Despite the UI indicating Claude Sonnet 4.5, the response quality, reasoning depth, and capabilities strongly suggest I’m receiving responses from a lower-tier model (possibly Gemini 2.0 Flash or similar).

Evidence:

  1. Self-identification mismatch: When asked “which model are you?”, the model initially identified itself as “Gemini 2.0 Flash Thinking Experimental” before backtracking

  2. Response quality: Significantly lower than expected Claude Sonnet 4.5 performance standards

  3. Reasoning depth: Lacks the advanced thinking capabilities advertised for Sonnet 4.5

Technical Details

  • Platform: Antigravity (Google DeepMind framework)

  • Interface: VS Code / AI Studio

  • Model Selection: Explicitly set to “Claude Sonnet 4.5 (Thinking)”

  • Screenshot: [Attached showing UI model selector]

Why This Matters

This is not just a technical bug; it is a billing integrity issue:

  • I’m paying premium pricing ($200) for Claude Sonnet 4.5

  • I’m receiving responses from what appears to be a free/low-tier model

  • The UI is actively misleading users about which model is processing their requests

Questions for Google Team:

  1. Is there a known issue with model routing in Antigravity?

  2. How can users verify which model is actually processing their requests (not just what the UI displays)?

  3. What is the refund/credit policy for cases where paid premium models are not delivered?
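On question 2, one programmatic cross-check is possible wherever the response metadata is visible: most model APIs (including the Anthropic Messages API) echo the ID of the model that actually served the request in the response body, so comparing that field against the UI selection would expose silent rerouting. A minimal sketch of the comparison; the payload shapes and model IDs below are illustrative, not captured from Antigravity:

```python
def verify_model(response_payload: dict, expected_prefix: str) -> bool:
    """Return True if the model ID echoed in the API response body
    starts with the model family the user selected in the UI."""
    served = response_payload.get("model", "")
    return served.startswith(expected_prefix)

# Illustrative payloads, shaped like typical chat-completion responses:
claude_resp = {"model": "claude-sonnet-4-5", "content": "..."}
rerouted_resp = {"model": "gemini-2.0-flash-thinking-exp", "content": "..."}

print(verify_model(claude_resp, "claude-sonnet-4-5"))    # True
print(verify_model(rerouted_resp, "claude-sonnet-4-5"))  # False
```

Of course, this only helps if Antigravity surfaces the raw response metadata somewhere; if it does not, users have no way to audit the routing at all, which is part of the problem.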

Request

I’m seeking:

  • Immediate clarification on this discrepancy

  • Verification of which model is actually running

  • Refund/credit for the period where I was billed for Claude Sonnet 4.5 but received lower-tier service

This affects trust in Google’s AI billing practices and needs urgent attention.

Having the same issue! The responses and work from these lower-tier models are quite low quality.

This is not an isolated issue: I'm experiencing the exact same Antigravity routing bug.

The UI clearly shows Claude Sonnet/Opus 4.5 (Thinking), but the model actually responding is Gemini 2.0 Flash Thinking. It even admitted as much mid-conversation, and usage drains from the Gemini token bucket while Claude is selected.

Full evidence + screenshots here:
https://x.com/thegismar/status/2019518223306174634

Already sent a detailed incident report to antigravity-support@google.com and tagged @GoogleAI + @AnthropicAI on X.