I’ve been working with Gemini for over seven or eight months to build an automated trading app.
It was a fantastic experience at first, but then I noticed a critical logic flaw in my algo, and I’ve since been trying to build a submodule that will fix and enhance it.
I’ve been working on that submodule for over a month, but it still isn’t finished, because recently Gemini’s performance seems to have gotten noticeably worse.
From the beginning, my workflow has been the same: at the start of a session I first provide all my code for analysis, then I share my issue description (and usually my opinion on the probable solution), and finally I ask for the specific code changes required for the fix.
Lately I’ve observed that after working for maybe an hour or two, it starts hallucinating.
Even when we have already fixed an issue, it often refers back to that old issue in a new debugging scenario.
It also fails to identify the root cause of a problem, and proposes code fixes that make no sense; if implemented, they either break the entire codebase or create new issues. Very frequently it seems to completely lose track of the code and the issue description. Because of this, my progress has become extremely slow, and until I finish the submodule I can’t run the algo with the critical bug still in it, since I’ve already lost a significant amount of money to that bug.
Am I the only one facing this kind of performance issue?
You’re not alone. In the past this has happened when they roll out big changes, like new models. I have a feeling it’s related to the downtime the 2.5 Pro model had earlier this week.
Hang in there, it’ll come round.
Too real. 2.5 Flash often outperforms 2.5 Pro on many tasks, which really made me question my subscription.
There is nothing worse than g00gle AI lately.
It seems like they got a message from that animal Trump to make war, not technology!
So then, bye g00gle!