Hello.
Subject of the issue: The model tends to over-focus on the user’s last message. This is not just a Gemini problem; it affects all major models.
Detailed explanation: The model concentrates so much on the user’s last message that it practically ignores the context of the entire conversation, to the point where it can get stuck in loops and contradict itself.
Example 1: When the model is helping with coding and we reject one of its suggestions, it sometimes proposes the same solution again a few messages later, or even in the very next reply. The model will of course apologize when the repetition is pointed out, but that only adds to the irritation: we expect correct behavior, not apologies.
Example 2: Many people use Gemini to help with writing, for example stories or novels. On YouTube I often watch professional and amateur writers use AI models to improve a chapter or plot thread of their novel. In this kind of creative work, the model is so focused on the last message that it introduces contradictions with earlier passages, whether it wrote them itself or received them from the user.
Consequences: When the user receives the same answer two or three times, or sees contradictions with the overall context, the result is irritation and a switch to another model. I do this myself: when the model suggests the same thing to me a second time, I know I will find the solution I’m looking for faster with the competition.
Solution: Add a “context focus” option so that the model gives previous information the same weight as the user’s last message, or perhaps even more weight. When a user writing, for example, a chapter of a novel has moved on to the next plot thread, it can be assumed that they have approved the previous one, so the model should all the more produce content consistent with what has already been written and not contradict it in new messages.
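To make the idea concrete, here is a minimal sketch of how such an option might look for developers using the Python SDK. The `context_focus` setting mentioned in the comments is purely hypothetical (nothing like it exists today); the system instruction shown is only a rough workaround that tries to approximate the behavior I am asking for.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Hypothetical: a "context focus" knob exposed as a setting, e.g.
# context_focus=1.0 meaning earlier turns carry the same weight as the
# latest user message, and values above 1.0 favoring established context.
# This parameter does not exist; it only illustrates the request.

# Today, the closest approximation is a system instruction that asks the
# model to re-read the whole conversation before answering.
model = genai.GenerativeModel(
    "gemini-1.5-pro",
    system_instruction=(
        "Before replying, review the entire conversation. Never repeat a "
        "suggestion the user has already rejected, and never contradict "
        "details established in earlier messages."
    ),
)

# Earlier turns that establish facts the model must stay consistent with.
chat = model.start_chat(history=[
    {"role": "user", "parts": ["In chapter 2 the detective lost her left arm."]},
    {"role": "model", "parts": ["Noted - I will keep her injury consistent."]},
])

response = chat.send_message("Continue the chase scene in chapter 5.")
print(response.text)
```

A built-in option would be far more reliable than this prompt-level workaround, because the instruction itself is just another piece of context that can be crowded out by the last message.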
My previous suggestions about the model sticking to its role were incorporated into LearnLM, so I hope someone will read this too. The competition does not provide free access to a playground; Google could use this to offer better conditions for people who expect something more from AI than an ordinary chat, and attract new users in the process.