Dear Team Gemini,
I hope you are doing well.
My name is Advaita, and I am a research professional at a renowned institution. We are working on a project using historical text records spanning 1950 to 2023, and we plan to transform this corpus into a structured, research-ready format by extracting key variables and generating standardized summaries. After extensive benchmarking of available models, we selected Gemini 3.0 Flash Preview as the best fit for our needs, given its accuracy and efficiency at scale.
We are now ready to begin processing the full dataset and are writing to request a custom rate limit increase to support this work. Our account is currently at Tier 1 in Google AI Studio, which constrains the volume of processing required. Our total estimated token usage is approximately 6,300M tokens. To manage this responsibly and respect processing capacity, we have structured the work into three sequential waves. The first and largest wave requires approximately 3,500M tokens; the second and third waves are smaller, requiring approximately 1,600M and 1,200M tokens, respectively.
Our primary bottleneck is the enqueued token limit, which currently prevents us from submitting multiple batches simultaneously. We are fully prepared to distribute each wave across several days; an increase to 1,000M enqueued tokens (up from the current limit of 3M) would make this feasible.
We have also submitted this request through the standard online form multiple times but have not yet received a response, so we are hoping for an expedited review here. It would be a great help if the rate limit increase could be processed in time to meet our upcoming publishing deadlines.
We would be grateful for any support the Gemini team can offer, and we are happy to provide additional documentation, usage projections, or context about the project if helpful. Thanks!