Hello peeps at Google,
Is anyone else hitting a hard wall with the AI Studio interface lagging out on long chats?
I’ve been using Gemini 3 Pro for some heavy coding sessions, and while the model itself is solid, the UI just can’t handle the history. Consistently around the 350k-400k token mark, the browser tab starts freezing, typing lags by full seconds, and eventually the whole page just crashes or stops accepting prompts.
It feels like a frontend memory leak or just the DOM getting too heavy because it tries to render the entire history at once. It’s pretty frustrating because the model supports 1M+ tokens, but I can’t actually use it because the interface dies halfway through.
I’m currently having to nuke chats and restart constantly, which defeats the point of long context.
@LoganKilpatrick - is there any plan to add virtual scrolling or fix the rendering for long chats? Right now the “Pro” context window is basically inaccessible in the Studio UI because of this.
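For anyone unfamiliar with the term: virtual scrolling just means rendering only the messages currently in the viewport (plus a small buffer) and replacing everything off-screen with spacer height, so the DOM stays small no matter how long the chat gets. A minimal sketch of the core index math, assuming fixed-height rows for simplicity (all names here are illustrative, nothing is actual AI Studio code):

```typescript
interface VisibleRange {
  start: number; // index of first rendered message
  end: number;   // index one past the last rendered message
}

// Compute which rows should exist in the DOM for the current scroll
// position. Everything outside [start, end) is left unrendered and
// represented by empty spacer divs of the right total height.
function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  rowHeight: number,  // fixed row height assumed; real chats need measurement
  totalRows: number,
  overscan = 5,       // extra rows above/below to avoid flicker while scrolling
): VisibleRange {
  const first = Math.floor(scrollTop / rowHeight);
  const visible = Math.ceil(viewportHeight / rowHeight);
  return {
    start: Math.max(0, first - overscan),
    end: Math.min(totalRows, first + visible + overscan),
  };
}

// With 10,000 messages, an 800px viewport, and 40px rows, only ~30
// DOM nodes exist at a time instead of 10,000.
const r = visibleRange(40_000, 800, 40, 10_000);
console.log(r.start, r.end); // 995 1025
```

Real chat messages are variable-height, so a production version would measure rows (or estimate and correct on scroll), but even this fixed-height scheme would keep the tab responsive at 400k+ tokens.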