I gave it a code file of about 300,000 tokens to read at the start of the chat, and my computer then reported insufficient memory. I checked, and the browser's background memory usage had grown to 3 GB, even though I'm sure I only had one AI window open; closing that window returned memory to normal. In the subsequent conversation, Gemini responded more and more slowly. Finally, I asked it to explain why a function from that code file stopped working after I changed a few things in it. It kept thinking, and after about a minute it reported an internal error. I'm using Gemini 1.5 Pro, Token: 393,908 / 2,097,152, temperature 1.0, everything else at the default settings!
Try Firefox; that fixed the issue for me with >200K token contexts.
Also a Firefox user here, and it still lags to a halt for me. Any idea what settings could be affecting this?
A 300k-token context is huge, and then you ran analysis on top of it, which overloaded the browser…
It's not like the AI runs client-side. If the developers had foreseen this, they'd probably unload some portion of the chat from the page to prevent it. The models do have a 1 million token context length; the UI must be the bottleneck, if you ask me.
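Something like this on the frontend would do it: windowed rendering, where only the last N messages stay mounted in the DOM while the full transcript is still what gets sent to the model. Just a minimal sketch of the idea; the ChatMessage shape and the RENDER_WINDOW cap are my own assumptions, not anything AI Studio actually does:

```typescript
// Hypothetical message shape, for illustration only.
interface ChatMessage {
  id: string;
  role: "user" | "model";
  text: string;
}

// Assumed cap on how many messages are rendered at once.
const RENDER_WINDOW = 50;

function visibleMessages(transcript: ChatMessage[]): ChatMessage[] {
  // Only the tail of the transcript gets mounted in the DOM.
  // The full transcript array is still sent to the model, so the
  // 1M-token context is unaffected; only rendering is capped.
  return transcript.slice(Math.max(0, transcript.length - RENDER_WINDOW));
}
```

That way DOM memory stays roughly flat no matter how long the chat gets, and older messages can be re-mounted on demand when the user scrolls up.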