[Bug] Frequent “prompt is too long” (200k token limit) after longer agent sessions

Hi team,

I’m repeatedly hitting this error in Antigravity after longer chat sessions with the Agent. Below is the full error output exactly as shown:

Trajectory ID: 4bf9ad84-7cad-4a23-9f1a-842b09cc7430
Error: HTTP 400 Bad Request
Sherlog: 
TraceID: 0x27e3f737f026a8b4
Headers: {"Alt-Svc":["h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000"],"Content-Length":["282"],"Content-Type":["text/event-stream"],"Date":["Fri, 27 Feb 2026 19:36:50 GMT"],"Server":["ESF"],"Server-Timing":["gfet4t7; dur=6355"],"Vary":["Origin","X-Origin","Referer"],"X-Cloudaicompanion-Trace-Id":["27e3f737f026a8b4"],"X-Content-Type-Options":["nosniff"],"X-Frame-Options":["SAMEORIGIN"],"X-Xss-Protection":["0"]}

{
  "error": {
    "code": 400,
    "message": "{\"type\":\"error\",\"error\":{\"type\":\"invalid_request_error\",\"message\":\"prompt is too long: 218556 tokens \\u003e 200000 maximum\"},\"request_id\":\"req_vrtx_011CYZ8uG45t8zKY64ia9Sam\"}",
    "status": "INVALID_ARGUMENT"
  }
}

The visible prompt I send is not that long. My assumption is that Antigravity accumulates artifacts, conversation history, tool outputs, and possibly file snapshots into the context window. Over time, this silently grows until it exceeds the 200k token limit.
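To make the failure mode concrete, here is a rough sketch of how a short visible prompt plus silently accumulated context can blow past the limit. This is not Antigravity's actual internals; the ~4 characters-per-token heuristic and the item names are assumptions for illustration only.

```python
# Rough illustration of silent context growth (NOT Antigravity internals).
# Assumes the common ~4 characters-per-token heuristic for English text.

TOKEN_LIMIT = 200_000

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token."""
    return len(text) // 4

def total_context_tokens(items: list[str]) -> int:
    """Sum estimated tokens over everything sent with the request:
    visible prompt, conversation history, tool outputs, file snapshots."""
    return sum(estimate_tokens(item) for item in items)

# A tiny visible prompt plus large accumulated artifacts:
context = [
    "Please fix the failing test.",   # visible prompt: a handful of tokens
    "x" * 500_000,                    # accumulated history / tool output
    "y" * 400_000,                    # file snapshots
]
used = total_context_tokens(context)
print(used, used > TOKEN_LIMIT)  # → 225007 True
```

The point: the user-visible prompt contributes almost nothing, yet the total request still exceeds 200k tokens.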

Creating a new conversation temporarily solves the issue, but it is not a practical workaround: it resets artifacts and context, and I have to re-explain everything, which disrupts workflow continuity.

Honest question: does Antigravity currently have any mechanism to automatically summarize or compress context when approaching the token limit? (Hey Antigravity, most of your competitors already have this feature; why don't you?)

There’s a related feature request here that may be relevant:
https://discuss.ai.google.dev/t/feature-request-visual-alert-to-prevent-prompt-is-too-long-error-context-window-indicator/127286

A visual context window indicator or early warning system would help prevent this failure. Automatic summarization or context compression when nearing the limit would be even better.
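A minimal sketch of the kind of pre-send guard being asked for, assuming a rough chars/token estimate and an arbitrary warning threshold (function names and the 85% figure are hypothetical, not anything Antigravity ships):

```python
# Hypothetical pre-send context guard (a sketch, not Antigravity's API).

TOKEN_LIMIT = 200_000
WARN_FRACTION = 0.85  # warn well before the hard limit (assumed threshold)

def estimate_tokens(text: str) -> int:
    """Rough ~4 characters-per-token heuristic."""
    return len(text) // 4

def check_context(context_items: list[str]) -> str:
    """Classify estimated usage before sending the request.

    'ok'       -> send as-is
    'warn'     -> show a context-window indicator to the user
    'compress' -> summarize or drop oldest items before sending
    """
    used = sum(estimate_tokens(t) for t in context_items)
    if used >= TOKEN_LIMIT:
        return "compress"
    if used >= WARN_FRACTION * TOKEN_LIMIT:
        return "warn"
    return "ok"
```

Running the check client-side before each request would turn today's post-hoc HTTP 400 into an early warning, and the "compress" branch is where automatic summarization could kick in.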

Right now the error only appears after the request is sent, which interrupts the workflow and forces a reset.

Would appreciate clarification on how context growth is handled internally and whether improvements are planned.

Thanks,

Hello @aldodkris
Thank you for your feedback. We appreciate you taking the time to share your thoughts with us, and we’ll be filing a feature request.
To help us prioritize this request effectively, it would be very helpful if you could share any additional details about the impact this feature would have on your workflow.

1 Like