Hello all,
I’ve been working with Gemini language models lately, and I’m running into challenges when the input text exceeds the model’s context window: important details get cut off or forgotten. I wanted to ask:
- How do you usually handle inputs that are longer than the context limit?
- Are there recommended approaches to chunk or summarize data before sending it to the model?
- Have there been any updates in 2025 addressing these issues?
Any insights or tips from your experience would be really helpful! Thanks!
Hi @xyzapk,
You can try using Retrieval-Augmented Generation (RAG) with Gemini models.
Also, please go through: Long context | Gemini API | Google AI for Developers
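On the chunking question: before wiring up full RAG, a simple first step is to split long inputs into overlapping chunks and send them to the model one at a time (summarizing each, then summarizing the summaries). Here is a minimal sketch of such a splitter; note it is character-based for illustration, and the `chunk_size`/`overlap` values are arbitrary assumptions, not Gemini-specific limits. A production version would count tokens with the model's tokenizer instead of characters.

```python
def chunk_text(text: str, chunk_size: int = 2000, overlap: int = 200) -> list[str]:
    """Split text into overlapping chunks so context survives chunk boundaries.

    Each chunk starts `chunk_size - overlap` characters after the previous one,
    so the last `overlap` characters of chunk N reappear at the start of chunk N+1.
    """
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    step = chunk_size - overlap
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks


# Example: a 5000-character input yields three overlapping chunks,
# each of which can then be summarized independently.
doc = "".join(str(i % 10) for i in range(5000))
chunks = chunk_text(doc)
print(len(chunks))  # 3
```

You would then loop over `chunks`, call the model on each piece, and combine the per-chunk summaries in a final pass (the "map-reduce" summarization pattern).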