I’m working on a project in Google AI Studio that really needs more than 1 million tokens in a single chat. Splitting the session breaks the context badly.
Is there any way I could get access to a higher token limit — maybe up to 10 million — even just once or under special conditions?
Totally understand if not, but I’d really appreciate any help or alternatives.
At this time, the model with the largest input window is Gemini 1.5 Pro, which accepts up to 2M input tokens. See if that model serves your purpose.
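For example, here's a quick way to check whether your content actually fits inside that 2M window, sketched with the google-generativeai Python SDK (the model name and placeholder values are just examples, adjust to whatever is currently available):

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder, use your own key

model = genai.GenerativeModel("gemini-1.5-pro")

# Stand-in for your real session transcript / project content.
long_context = "...your full session content..."

# Count tokens before sending; the total must stay under the ~2M input limit.
print(model.count_tokens(long_context).total_tokens)
```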
The other possible workaround is a RAG-based approach: store the information in a vector store (or database) and retrieve only the relevant pieces from it as required by the user's query, so each prompt stays well under the limit. A rough sketch is below.
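Here's a minimal sketch of that RAG flow, assuming the google-generativeai and chromadb packages (`pip install google-generativeai chromadb`); the model and embedding names are examples, so swap in whatever is currently offered:

```python
import chromadb
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder, use your own key

# 1. Chunk your long context and store embeddings in a local vector store.
docs = ["chunk one of your long context...", "chunk two...", "chunk three..."]
collection = chromadb.Client().create_collection(name="project_context")
for i, doc in enumerate(docs):
    emb = genai.embed_content(
        model="models/text-embedding-004",
        content=doc,
        task_type="retrieval_document",
    )["embedding"]
    collection.add(ids=[str(i)], documents=[doc], embeddings=[emb])

# 2. At query time, embed the question and fetch only the relevant chunks.
question = "What did we decide about the schema migration?"
q_emb = genai.embed_content(
    model="models/text-embedding-004",
    content=question,
    task_type="retrieval_query",
)["embedding"]
hits = collection.query(query_embeddings=[q_emb], n_results=3)
context = "\n\n".join(hits["documents"][0])

# 3. Send just the retrieved chunks plus the question, not the whole corpus.
model = genai.GenerativeModel("gemini-1.5-pro")
response = model.generate_content(f"Context:\n{context}\n\nQuestion: {question}")
print(response.text)
```

The trade-off is that the model only ever sees the retrieved chunks rather than the full history, so chunking strategy and `n_results` matter; but it scales to far more than 10M tokens of stored material.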