GPT-OSS 120B model - Failed to send

The GPT-OSS 120B model is completely non-functional: every message fails with "Failed to send". The Output console shows a validation error stating that config.MaxTokenLimit exceeds what the model supports. I have tried everything possible locally:

  • Reinstalling the extension.

  • Cleaning cached files and deleting the .antigravity folder.

  • Testing on a different machine and operating system (same result).

  • Adding a maxTokenLimit parameter to settings.json (no effect; see the sketch after this list).
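
For reference, this is roughly what I added to settings.json (the exact key name and placement are my guess; the setting is not documented anywhere I could find):

```json
{
  // Hypothetical entry; it had no effect in my testing.
  // 114688 = context window (131072) minus MaxOutputTokens (16384).
  "maxTokenLimit": 114688
}
```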

Steps to Reproduce

  1. Select the GPT-OSS 120B model in the chat panel.

  2. Type anything (e.g., “test”) and try to send the message.

  3. The “Failed to send” error appears.

  4. The following log appears in the Output panel:

checkpoint config validation failed: config.MaxTokenLimit (128000) cannot exceed the planner model's context window limit (131072) minus the planner model's MaxOutputTokens (16384)
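
If I'm reading that message correctly, the validator requires MaxTokenLimit to be at most the context window minus MaxOutputTokens, i.e. 131072 - 16384 = 114688, while the model apparently ships with a default of 128000. A minimal sketch of the check as I understand it (the names are mine, taken from the log, not from Antigravity's source):

```python
# Reconstruction of the validation rule implied by the log message.
CONTEXT_WINDOW = 131072     # planner model's context window limit
MAX_OUTPUT_TOKENS = 16384   # planner model's MaxOutputTokens
MAX_TOKEN_LIMIT = 128000    # config.MaxTokenLimit shipped with GPT-OSS 120B

budget = CONTEXT_WINDOW - MAX_OUTPUT_TOKENS  # 114688
if MAX_TOKEN_LIMIT > budget:
    raise ValueError(
        f"checkpoint config validation failed: config.MaxTokenLimit "
        f"({MAX_TOKEN_LIMIT}) cannot exceed {budget}"
    )
```

Since 128000 > 114688, the check fails before the request is ever sent, which would explain why no local workaround changes anything.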

Does this model work for any of you? Is the agent functional at all, or is additional configuration required? Any insights would be appreciated.

Hello!
I just had a similar experience myself (the "Failed to send" error appeared) and recently finished posting about it on the forum.

If you don’t mind, could you share the details of what happened at that time?
In my case… well, I can't point to anything specific; it just felt like the conversation suddenly cut off.
I was chatting normally in Antigravity when, all of a sudden, I couldn't send messages in any conversation. Like you, I tried various things, such as reinstalling the app and disconnecting from MCP, but nothing worked.

However, the fact that you've also run into this recently suggests it might be some kind of bug.
There may be others facing the same problem as us who simply haven't posted about it yet.

I’m sorry, but I don’t have a solution for this issue myself at the moment.
We might need to wait a little longer for an official statement…(´・ω・`)

I’ll share any new information if I find out anything.
See you later.

Solution (How I resolved the issue)
Here it is; it was very simple.

I switched the conversation model from GPT-OSS 120B to Claude Sonnet 4.5, and the conversation resumed.

I saw in another user's thread that some conversation models may have stopped working, so I tried switching and it worked for me.

I hope this resolves your issue.

GPT-OSS 120B hasn’t worked for me from the very beginning. Could it be some sort of silent ban based on gender, race, or religion? Additionally, my Claude usage was at 40%, and now I have to wait an entire week for it to unlock. This approach is very unfair, lacks transparency, and violates customer rights.

Yes, I did see several people in the thread I looked at who share the same distrust and disgust you’re feeling right now.
My English isn’t very good, but from the official statement I understood that whether the problem occurs might depend on the plan you’re subscribed to (there’s a difference between Pro and Ultra).

Here’s a screenshot from that time.

And since I’m not particularly knowledgeable about PC environments either, I’m reluctantly sticking with this workaround because I need to get on with my work.

I’m also anxious about what kind of response will come going forward, and frankly, I’m not even sure I’ve gathered accurate information about this issue.

My issue has been temporarily resolved, but I’ll keep looking into other possible solutions, both for the future and for your sake.

I’ll report back if anything else comes up.


Such practices do not build trust, so I am looking for a way to work entirely offline. Yesterday I came across a very interesting piece of information: https://www.youtube.com/watch?v=jWhnicSLdD4

Thank you for flagging this issue. I have observed the same behavior and have escalated the matter to the relevant engineering team. To help us with deeper investigation, please provide any supporting details, such as logs or screenshots, if available.

I can’t comment on this forum because of censorship. I reported the problem via Antigravity. I have no illusions and am not counting on a positive resolution.

@Abhijit_Pramanik I’ve got a genius idea to fix the rate limits – just make the model actually follow prompts and stop hallucinating. That’ll cut the load by 90%. Who do I bill for this strategic advice?