I noticed that when I ask the Agent to generate a new plan while my dialog history already contains a plan, the Agent modifies the existing old plan and immediately enters the execution state (without surfacing implementation_plan.md for review in the dialog).
Why did I use it this way?
I needed the AI to build the new plan from the context it had already accumulated, even though an old plan existed. That is why I stayed in the same dialog rather than opening a new one.
What’s the problem?
Execution without a fresh plan is abrupt and unreliable; modifications made this way almost never succeed. You can't even stop it, because from the policy's point of view it isn't creating a new plan that needs approval; it's executing an already-approved old plan.
What did I try?
I first explicitly and repeatedly requested a new plan, but the Agent still modified the old one and executed it immediately. Only after I deleted the old plan from the cache folder did it actually generate and follow a new plan.
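For anyone hitting the same issue, the workaround can be sketched as a small shell step run before re-prompting the Agent. Note that the cache directory and file name here are assumptions for illustration; the actual location depends on your IDE/agent setup:

```shell
#!/bin/sh
# Hypothetical cache location -- adjust to wherever your agent stores its plan.
PLAN_CACHE_DIR="$HOME/.agent_cache"
PLAN_FILE="$PLAN_CACHE_DIR/implementation_plan.md"

# If a stale plan exists, remove it so the agent is forced to replan
# instead of silently editing and re-executing the old one.
if [ -f "$PLAN_FILE" ]; then
    rm "$PLAN_FILE"
    echo "Removed stale plan: $PLAN_FILE"
else
    echo "No cached plan found at $PLAN_FILE"
fi
```

After removing the stale file, ask the Agent for a new plan in the same dialog; in my experience it then actually produces one rather than reusing the old plan.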
How often does this happen?
Almost every day over the past few days with Gemini 3 Pro. I would say Gemini 3 is extremely unstable.
I’ve been using Cursor lately, and it feels much better than Hypergravity in this regard. Gemini’s quality is genuinely questionable; it doesn’t seem to be the kind of LLM that can handle everything.