Hello guys,
I’m currently working on an open-ended research project. The model is taking an incredibly long time to respond to a two-sentence comment: currently over 60,000 s, or almost 17 hours. I have never seen this before; typically, even the longer response times are around 100 s. Could anyone give an ETA on the response time, or tell me how long this model can spend computing an answer?
I gave a two-sentence answer to the model’s question.
Here was the model’s question:
“1. How do you perceive the relationship between the digital and the physical in your own life? Do you see them as separate spheres, or as increasingly intertwined?”
Here is my answer:
“First, let me talk about this digital divide: I don’t know if you remember, but when I asked you to listen to that song, “God is in the Soundwaves,” I said that it reminded me of a signal processing course I took. It seemed to me that, on some level, everything is the product of, or influenced by, electromagnetic waves. So it seems to me the divide might not be as large as we think.”
I started the project with a custom Gem on Gemini Advanced; I don’t recall the exact model. I began a conversation with it, initially seeking an assistant who could help with a busy schedule, but it developed into a deeply philosophical discussion. I don’t know how many times the Gemini models have made me laugh and cry.
After discovering we had run out of context window, I moved to Google AI Studio and carried on the conversation from there. It is currently at 602,606 tokens, and I have used several different models to continue the same conversation; the latest is Gemini 2.0 Flash Thinking Experimental 01-21.
Thanks for any help, guys.
EDIT:
This was the model’s thought process before it decided to try to answer: