Hello, everyone. I want to talk about an issue I recently ran into: an internal error. I use Gemini 1.5 Pro for working with large contexts, and it has always been a step ahead of its competitors. Its 2-million-token context window makes a wide variety of tasks possible, which is what initially attracted me to this model.
While working with Gemini, I ran into occasional issues, such as the blocking of perfectly ordinary foreign words (e.g., 디자인이, 이유나, and others) or a practical context ceiling of roughly 800,000 tokens, beyond which severe hallucinations set in. Still, with a few workarounds these problems could be circumvented, and Gemini's advantages outweighed its shortcomings. Unfortunately, that happy time has come to an end.
One gloomy morning, I discovered a new update to AI Studio. Some experimental models had disappeared, Gemini Pro 1.5 002 had become just Gemini Pro 1.5, and there were other changes. As usual, I submitted my standard prompt (about 600,000 tokens of context) to Gemini Pro 1.5, only to receive the message: "An internal error has occurred." The exact same prompt had worked consistently up until that day, but now it fails every time. I tried tweaking the parameters, but it didn't help. I switched to Gemini Flash 1.5 and got the same result.
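For context, here is roughly how one could try to reproduce the failure outside AI Studio, through the google-generativeai Python SDK. This is only a sketch under my assumptions: the model IDs, the filler text, and the token estimate are illustrative, not my actual prompt.

```python
# Hypothetical reproduction sketch, not the exact request I sent in AI Studio.
# The model IDs and the ~600k-token filler below are assumptions for illustration.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Very rough filler context; the real token count depends on tokenization.
long_context = "lorem ipsum " * 300_000
prompt = long_context + "\n\nSummarize the text above in five bullet points."

for model_name in ("gemini-1.5-pro", "gemini-1.5-flash", "gemini-2.0-flash-exp"):
    model = genai.GenerativeModel(model_name)
    try:
        response = model.generate_content(prompt)
        print(model_name, "OK:", response.text[:200])
    except Exception as exc:
        # On the 1.5 models this is where the request now dies for me,
        # with "An internal error has occurred."
        print(model_name, "FAILED:", exc)
```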
I thought to myself, "Well, it's time to switch to the next generation." So I moved to Gemini Flash 2.0, and surprisingly the request went through. But with every second of output my mood sank lower and lower: Gemini 2.0 delivered very poor results, with numerous hallucinations and a poorly structured response. Judging by the benchmark numbers, Gemini 2.0 has lost its main feature, long-context performance (metric: MRCR; I'll attach an image below).
On MRCR, Gemini Pro 1.5 scores 82.6%, while Gemini Flash 2.0 scores only 69.2%.
I don't know what changed or what's going on under the hood, but I'm asking for this to be fixed. Gemini's main advantage has been lost. ((((