Per-conversation model selection: Let each chat window use its own model

Problem

When I change the AI model in Antigravity (e.g., from Gemini 3 Flash to Claude Opus 4.6), it applies globally to all open conversation windows.

I usually have 4-5 conversations open simultaneously, each for a different project. I need different models for different tasks:

  • Heavy architectural work → Claude Opus 4.6
  • Quick bug fixes → Gemini 3 Flash
  • General development → Gemini 3.1 Pro

But switching the model in one window changes it everywhere, forcing me to constantly switch back and forth.

Expected Behavior

Each conversation window should remember its own model selection independently.

Why This Matters

  • Different tasks benefit from different models (speed vs. depth)
  • Constantly switching wastes time and breaks workflow
  • Other tools (GitHub Copilot, Tabnine, JetBrains AI) already support per-chat model selection
  • Cursor AI had the exact same issue and is actively fixing it

Suggested Implementation

  • Add a model selector per conversation instead of a single global setting (a rough sketch of the fallback follows below)
  • Or, at a minimum, allow each workspace/project to set its own default model
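
To make the idea concrete, here is a minimal sketch of the resolution order I have in mind: per-conversation override first, then the workspace default, then the global setting. All of the type and function names below are hypothetical illustrations, not anything from Antigravity's actual code:

```ts
// Hypothetical sketch only: none of these names come from Antigravity.
// It illustrates the suggested fallback order:
// conversation override -> workspace default -> global default.

type ModelId = "gemini-3-flash" | "gemini-3.1-pro" | "claude-opus-4.6";

interface Conversation {
  id: string;
  modelOverride?: ModelId; // set from a per-conversation selector
}

interface Workspace {
  defaultModel?: ModelId; // optional per-workspace default
}

const GLOBAL_DEFAULT: ModelId = "gemini-3.1-pro";

// A conversation's own override wins; otherwise fall back.
function resolveModel(conversation: Conversation, workspace: Workspace): ModelId {
  return conversation.modelOverride ?? workspace.defaultModel ?? GLOBAL_DEFAULT;
}

// Changing the model in one window touches only that conversation,
// so other open windows (and any agents running in them) keep theirs.
function setModelForConversation(conversation: Conversation, model: ModelId): void {
  conversation.modelOverride = model;
}
```

With a fallback like this, switching models in one window can never affect the others, while the global setting still works for anyone who only uses one model.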

Environment: Antigravity standalone app, Windows, multiple conversations open simultaneously.

Thank you!


Yes, please add this in. I am trying to load-balance quotas: I have one context set up for quick updates, fixes, and documentation changes.

Another handles one portion of my project code using Opus 4.6, while a different section is given to Gemini 3.1. I am constantly having to switch models when I move between chat contexts, and it is very tedious. Sometimes you forget and burn your thinking tokens on a simple task like ‘Update the documentation and log the work’ or ‘Add a new instruction to HANDOFF.md’.

Another issue arises with parallel agents: if you change the model in one window, it shifts the model in all of the windows. I am not sure whether this changes the model an agent is using in real time while it is still processing a prompt.

If we could lock in a model for each chat context, that would solve the problem entirely. It would be a great change, so +1 to seeing this implemented soon!