A lot of people have complained about this before. Maybe unload a percentage of the chat while it's out of frame to solve this? The models do have a 1M-token context window after all; we just can't use it because of the lag.
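To make the suggestion concrete, here is a minimal sketch of the "unload while out of frame" idea (often called list virtualization): only the messages that intersect the scroll viewport get rendered, and everything else is replaced by empty spacers. All names here are illustrative assumptions, not AI Studio's actual implementation.

```typescript
// Hypothetical message shape: each chat message has a known rendered height.
interface Message {
  id: number;
  height: number; // pixels
}

// Returns the index range [start, end) of messages that intersect the
// viewport, padded by `overscan` extra messages on each side so scrolling
// stays smooth. Off-range messages would be rendered as fixed-height
// spacer divs instead of full DOM.
function visibleRange(
  messages: Message[],
  scrollTop: number,
  viewportHeight: number,
  overscan = 5
): [number, number] {
  let offset = 0; // running top position of message i
  let start = messages.length;
  let end = messages.length;
  for (let i = 0; i < messages.length; i++) {
    const bottom = offset + messages[i].height;
    // First message whose bottom edge is below the top of the viewport.
    if (bottom > scrollTop && start === messages.length) start = i;
    // First message that starts below the bottom of the viewport.
    if (offset >= scrollTop + viewportHeight) {
      end = i;
      break;
    }
    offset = bottom;
  }
  return [
    Math.max(0, start - overscan),
    Math.min(messages.length, end + overscan),
  ];
}
```

With a 100k-message history, the browser would only ever lay out the handful of messages returned by `visibleRange`, which is why long chats in virtualized UIs stay responsive regardless of total length.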