Gemini 1206 ... disappointed... what's wrong?

I have been using Gemini 1206 for a few days now, primarily for retrieving highly detailed and professional knowledge (medical information) and drafting articles. Until yesterday, I was deeply impressed by the remarkable performance of 1206.

Depending on the prompts, when instructed to provide detailed and professional information, it generated exceptionally long, specific, and expert-level reports, outperforming both ChatGPT and Claude in terms of both quantity and quality.

However, starting today, I’ve noticed a significant decline in quality. While it seems to maintain a similar length, the level of detail, depth, and writing quality feels noticeably inferior compared to what I experienced before.

I’m beginning to suspect that this might be due to the recent integration of Gemini 1206 into the premium Gemini Advanced model, potentially leading to a division of resources.

Have others experienced this as well? Particularly for generating long-form text, have you noticed any degradation in performance?


Somehow, it thinks today’s date is 20/06/2024, and there is no way to convince it that it’s wrong…

Welcome to the forum.
For any LLM, time freezes when its training stops (at the training cutoff date). It is perpetually stuck in that same instant until it gets retired. You can inform the model of the current date, and you absolutely should whenever it makes a difference. Sometimes even Google sample code forgets this premise - one cookbook Google Colab asks the model the ages of some popular actors, and the answer would have been correct when the cookbook was first published, but the model now gets the ages badly wrong (the actors have moved forward in time, and the model hasn’t).