Another thing I noticed inside one of my chats that has this problem is that sent images seem to be completely missing from the chat for some reason…
And the other weird thing I found is that I have another chat with over 400k tokens that uses 2.5 Pro, and it doesn’t have the content loading bug, whereas the chat that does have it is around 350k tokens and uses the same model…
I have the same issue as well. One way I can mitigate it is to press alt+enter so the input is inserted without running the model, then use the rerun button on that input.
But the project won’t be saved since, like others said, it’s “waiting for content to finish loading.”
From what I observe, the model tends to wander off a bit, which isn’t visible in the UI, but if you wait a minute or so you’ll be able to enter your prompts again. The problem with the larger models is that the more prompts and context you accumulate, the longer they take. That issue only affects the newer versions above 2.0; they’re also slow at calculating the token count for an image, or for files in general. Hopefully they’ll fix this in a newer version.