Gemini-2.0-flash-thinking-exp-01-21 and gemini-2.0-pro-exp-02-05 feedback

Hi, I just wanted to give some feedback on these two models.

gemini-2.0-flash-thinking-exp-01-21

My prompt is a TypeScript coding project whose token count has now grown to around 400K, built up mostly with the gemini-2.0-flash-thinking-exp-01-21 model. This model has done a great job of maintaining context as the window grows, although it often has issues with the “thinking” panel: sometimes it doesn’t display, sometimes it’s missing entirely. Overall, though, it’s been very good.

gemini-2.0-pro-exp-02-05

This is the first model of all the models I’ve tried (I’ve been using various models since early Advanced) that gives truly bizarre results at a “low” context count. Previously, Pro 1.5 would start to deteriorate around the 1M-token mark (even with a 2M window), so I started creating new prompts from “seed” prompts for continuity. But the new gemini-2.0-pro-exp-02-05 has already, in more than one session, produced completely out-of-context answers. It thinks changes made a week or two ago were just made, opines on them, and changes its mind in the process. It simply doesn’t handle the window well, especially in contrast to 2.0 Flash Thinking (and the non-thinking version, for that matter).

So, for me, this is the first model that seems to have serious problems.

That’s it! Thanks so much for all your hard work. Let me know if you want more info or if there is some other more official channel for feedback on these models.