Over the past 24 hours I've noticed it takes an insanely long time to do very simple tasks. For example, I asked AI Studio to implement a simple slider that adjusts the size of an image, and it took 3,000 seconds with 2 timeouts before replying that the quota was exceeded and failing to complete the task. This is ridiculous, because after sitting down and doing it myself, it took me around 20 minutes, including the time to research the code and make all the changes needed to get it working in my app.
Yeah, I noticed the same thing, but I've been seeing it over the past week or so. If you look at the thinking patterns, it's thinking deeper than it used to; it looks like it's going into every nook and cranny of my code to try to figure out a simple question. And it's not just 3.1: 3 Flash also now appears to think deeper. My strategy, which hasn't worked so far, is to give it specific things to do, but even when given a list of atomic tasks it still thinks forever, essentially eating up all the quota. Funny enough, when I asked 3.1 itself about it and gave it a screenshot, it was remarkably surprised that it had spent 26 minutes trying to figure out a simple question.
But yeah, something has definitely changed.
It's a mess. I notice the deeper thinking too: at least ~20–80 s of thinking time and then lots of time reading files, etc. This isn't looking good. I might end up moving to Lovable if AI Studio keeps being this inconsistent.
Hi
Would it be possible to share more context around this to help us debug better?
- Which models are you using?
- Does this happen for all prompts, or are there specific prompts where you've noticed it? If you could share a prompt, that would be great.
Hey, I’m also experiencing the exact same issue.
- I'm using the models available in Build: Gemini 3.1 Pro Preview, Gemini 3 Flash, and Gemini 3.1 Flash Lite.
- This usually happens with all prompts. Here is an example prompt that took ~1,010 seconds (model used: Gemini 3.1 Pro Preview):
“Build me a secure password generator with different customization options. Have a professional, modern, liquid-glass hero section and landing page.”
For me, all the models are affected. When I watch their thinking, it often feels like they wander around the code, yet they usually land in the right spot, even if I wouldn't have approved of the route taken. 3.1 Pro digs deep, working hard to avoid mistakes, while 3.1 Flash does the same but doesn't go as far as Pro. This becomes clear when troubleshooting errors: Flash sticks to surface-level issues, while Pro digs deeper and connects the dots.
What else I have found:
I cleaned up my system instructions, making them more coherent and less contradictory, which really helped reduce the meandering. I think when the model hits conflicting instructions, it bounces back and forth trying to resolve them. It still thinks more than before, but now in a straight line. I suspect the thinking settings were adjusted, which calls for new prompting strategies. I don't give Pro simple surface-level tasks anymore, since that would waste quota; those go to Flash or Lite depending on complexity, and I only use Pro when there's a lot to consider.
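For what it's worth, the routing strategy above (Lite for trivial tweaks, Flash for medium tasks, Pro only when there's a lot to consider) can be sketched roughly like this. The model IDs and the word-count/keyword heuristic are my own illustrative assumptions, not anything AI Studio does for you:

```python
# Hypothetical task router: keep Pro for genuinely complex work and
# send lighter tasks to Flash / Flash Lite so they don't burn quota
# on deep thinking. Model names and thresholds are assumptions.

HEAVY_MARKERS = ("refactor", "architecture", "debug", "race condition", "migrate")

def pick_model(task: str) -> str:
    """Crude complexity estimate -> model ID (assumed names)."""
    words = len(task.split())
    heavy = any(marker in task.lower() for marker in HEAVY_MARKERS)
    if heavy or words > 60:
        return "gemini-3.1-pro-preview"   # lots to consider: use Pro
    if words > 15:
        return "gemini-3-flash"           # medium-complexity tasks
    return "gemini-3.1-flash-lite"        # quick surface-level tweaks

if __name__ == "__main__":
    print(pick_model("Fix a typo in the README"))
```

Obviously a real version would use a better complexity signal than word counts, but even a crude gate like this has kept me from handing Pro tasks it will overthink.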