Why did you reduce the context window for Gemini 2.5 Pro to 1.04 million tokens? If you do so in upcoming models, how can the latest LLMs take real-world data as input? Did you find an efficient way to convert large datasets into small tokens? Please clarify this for me.
Hi,
Welcome to the forum.
Maybe, just maybe, it is experimental? One thing worth checking first: "1.04 million" is very likely just 1,048,576 tokens (2^20) written out exactly, which is the standard "1M" context window, so it may not be a reduction at all.
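If your real concern is fitting large real-world inputs under that limit, you can check it programmatically rather than guessing. Here is a minimal sketch assuming the google-genai Python SDK, an API key in the GOOGLE_API_KEY environment variable, and a hypothetical helper named fits_in_context (the model name and sample text are placeholders):

```python
from google import genai

# Sketch only: model name, env-var auth, and the helper below are
# illustrative assumptions, not the only way to do this.
client = genai.Client()  # reads GOOGLE_API_KEY from the environment

MODEL = "gemini-2.5-pro"

# Ask the API for the model's real input limit instead of hard-coding it.
info = client.models.get(model=MODEL)
print(f"{MODEL} input token limit: {info.input_token_limit}")
# Typically prints 1048576 (2**20), the figure behind "1.04 million".


def fits_in_context(text: str) -> bool:
    """Count tokens server-side and compare against the model's limit."""
    count = client.models.count_tokens(model=MODEL, contents=text)
    return count.total_tokens <= info.input_token_limit


if __name__ == "__main__":
    sample = "Some large real-world document..." * 1000
    print("fits:", fits_in_context(sample))
```

If a dataset exceeds the limit, the usual approach is to chunk it and send pieces across multiple requests; there is no mechanism that compresses large datasets into fewer tokens on the model side.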
Cheers