Models affected: Claude Opus 4.6 Thinking, Claude Sonnet 4.6
OS: Windows
Description
Claude models fail mid-conversation with HTTP 400 after
multiple turns. The session runs normally at first,
then crashes after the model has been working for some time.
130 AI credits were consumed before the crash, with no
useful output returned.
This is NOT a failure on the first message — the model
processes for a while before the error occurs. This suggests
Antigravity’s message history construction breaks after
several turns, placing an assistant message as the final
turn in the Vertex AI request.
Error 1 — Claude Opus 4.6 Thinking
Trajectory ID: b4c053bc-36e0-422d-8efe-00ec5de78d8d
Request ID: req_vrtx_011CZXKnKiehDbvJFSvLvBQU
Message: “The final block in an assistant message
cannot be thinking.”
Error 2 — Claude Sonnet 4.6
Trajectory ID: ed8ed5fb-023b-4a3b-a88b-420cfc35f427
Request ID: req_vrtx_011CZXLt9ARpNTD2oSzUXpdC
Message: “This model does not support assistant message
prefill. The conversation must end with a user message.”
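Both errors point at the shape of the final message in the request. The following is a hypothetical reconstruction of the rejected versus accepted message arrays (field names follow the Anthropic Messages API as exposed through Vertex AI; these payloads are illustrations, not captures from Antigravity):

```python
# Hypothetical reconstruction of rejected vs. accepted payload shapes.
# Content-block field names follow the Anthropic Messages API.

# Error 1: the last content block of a trailing assistant message is "thinking".
rejected_thinking_final = [
    {"role": "user", "content": "Build the project described below..."},
    {"role": "assistant", "content": [
        {"type": "thinking", "thinking": "Planning the next step..."},
        # No text or tool_use block follows, so the request is rejected.
    ]},
]

# Error 2: the conversation ends with an assistant message at all,
# which the endpoint treats as unsupported assistant-message prefill.
rejected_prefill = [
    {"role": "user", "content": "Build the project described below..."},
    {"role": "assistant", "content": [{"type": "text", "text": "Partial answer..."}]},
]

# Accepted: the final turn is a user message.
accepted = [
    {"role": "user", "content": "Build the project described below..."},
    {"role": "assistant", "content": [{"type": "text", "text": "Partial answer..."}]},
    {"role": "user", "content": "continue"},
]

def ends_with_user(messages):
    """True when the final turn is a user message, as the endpoint requires."""
    return bool(messages) and messages[-1]["role"] == "user"
```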
Steps to reproduce
Open Antigravity
Create a new chat
Select Claude Opus 4.6 Thinking or Claude Sonnet 4.6
Send a long, complex prompt (multi-section project brief)
Model begins working and consumes credits
After several minutes, the request fails with HTTP 400
Root cause (suspected)
After multiple reasoning turns or a long thinking block,
Antigravity appends an incomplete assistant message to the
conversation history and sends it as the final turn to
Vertex AI. Vertex AI’s Claude integration does not allow
this — the final turn must always be a user message.
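If this diagnosis is right, the history could be repaired client-side before each request. A minimal sketch, assuming the history is a list of Anthropic-style message dicts (the function name and data shape are assumptions, not Antigravity's actual internals):

```python
def sanitize_history(messages):
    """Return a copy of the history that ends with a user turn and whose
    assistant messages never end in a bare thinking block.

    Assumes `messages` is a list of {"role": ..., "content": ...} dicts
    in Anthropic Messages API shape; this is a sketch, not Antigravity's
    real data model.
    """
    cleaned = []
    for msg in messages:
        if msg["role"] == "assistant" and isinstance(msg["content"], list):
            content = list(msg["content"])
            # Drop trailing thinking blocks left over from an interrupted turn.
            while content and content[-1].get("type") == "thinking":
                content.pop()
            if not content:
                continue  # the whole assistant turn was an orphaned thinking block
            msg = {**msg, "content": content}
        cleaned.append(msg)
    # The endpoint requires the final turn to be a user message, so drop
    # any incomplete assistant message left at the tail of the history.
    while cleaned and cleaned[-1]["role"] == "assistant":
        cleaned.pop()
    return cleaned
```

Dropping the dangling assistant tail (rather than padding with a synthetic user turn) avoids the prefill restriction entirely, at the cost of re-generating the interrupted turn.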
Impact
130 AI credits consumed with no output delivered
Claude models are unreliable for any long-running task
No way to resume or recover the lost session
Request
Fix message history construction to ensure the final
turn sent to Vertex AI is always a user message
Implement credit refund or session recovery for
crashes caused by this bug — users should not lose
credits for Antigravity’s own errors
I hope this helps in diagnosing and mitigating the issue.
I encountered a similar problem while sending long, multi-section prompts (in my case, for designing and implementing 27 microservices). The system intermittently failed with errors such as “model does not support message,” especially after extended processing.
From my observation, this appears to be related to conversation state management and payload size/structure rather than a model limitation.
Workarounds that proved effective:
Reset conversation state:
Cleared the entire previous conversation history for the affected session to avoid corrupted or oversized context.
Chunk large prompts:
Avoided sending a single large prompt. Instead, decomposed the request into smaller, logically separated phases (e.g., architecture → services → APIs → deployment).
Maintain valid turn structure:
After long-running responses, explicitly sent a short user message (e.g., “continue”) to ensure the conversation ends with a valid user turn before the next request.
If the issue persists:
Start new chat
Reuse summarized context (not full history)
These steps significantly improved stability for long-running or complex workflows and helped avoid errors related to message validation.
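The chunking and turn-structure workarounds above can be sketched as a small client loop. Here `call_model` is a stand-in for whatever call the client actually makes, and the phase names are examples from the microservices use case, not a prescribed decomposition:

```python
# Sketch of the chunked, turn-safe workflow described above.
# `call_model` stands in for the real client call and is assumed to
# take the history and return one assistant message dict.

PHASES = ["architecture", "services", "APIs", "deployment"]

def run_phased(call_model, brief):
    """Send one small prompt per phase instead of one oversized brief."""
    history = []
    for phase in PHASES:
        # Each request ends with this fresh user turn, satisfying the
        # "conversation must end with a user message" requirement.
        history.append({"role": "user",
                        "content": f"{brief}\nNow cover only: {phase}."})
        history.append(call_model(history))
    return history
```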
I am a Google AI Ultra subscriber and I am experiencing the exact same HTTP 400 bug described in this thread.
Trajectory ID: 340fac54-8693-43f3-a190-a4ee0…
Error: “This model does not support assistant message prefill. The conversation must end with a user message.”
Model: Claude Sonnet 4.6 (Thinking)
This is unacceptable. I am paying a premium subscription fee for Google AI Ultra, and Antigravity is consuming my AI Credits mid-task — then crashing with an error that is entirely caused by Antigravity’s own broken message history construction. The model never gets to finish. The credits are gone. There is no recovery.
This is not a user error. This is not a network issue. This is a Google engineering failure to properly adapt Antigravity’s message formatting to comply with Claude’s API contract, which clearly states the conversation must end with a user message. Anthropic documented this requirement. Google ignored it — and paying users are the ones absorbing the cost.
I am demanding:
An immediate fix to Antigravity’s message history construction so the final turn sent to Vertex AI is always a user message.
A full refund of all AI Credits consumed in sessions that terminated due to this bug.
A public acknowledgment and timeline for resolution — not silence.
I chose Google AI Ultra specifically for Claude access. Right now I am paying for a broken product. Please escalate this.