According to the Gemini fine-tuning documentation, “The input limit of a tuned Gemini 1.5 Flash model is 40,000 characters”. What does “input limit” mean here? Is it the total context window (including all previous questions and Gemini’s previous answers), or is only the latest query input subject to this limit (with the context window still at 1 million tokens)? Thanks
Clarification of Input Limit for Fine-tuned Gemini 1.5 Flash Models:
The 40,000-character input limit for fine-tuned Gemini 1.5 Flash models applies to the data you send and receive within a single interaction, not to the conversation as a whole. This means:
- Fine-tuning Datasets: Each training example (input prompt and desired response) must be less than or equal to 40,000 characters; refer to the fine-tuning tutorial for details.
- Using the Tuned Model: Each request (input prompt) you send to the fine-tuned model must be less than or equal to 40,000 characters (see the sketch after this list).
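For a quick client-side sanity check, here is a minimal Python sketch. The constant `MAX_TUNED_INPUT_CHARS` and the helper names are illustrative, not part of any Gemini SDK, and the `text_input`/`output` keys follow the tuning tutorial’s dataset format:

```python
# Illustrative client-side checks against the documented 40,000-character
# limit for tuned Gemini 1.5 Flash models. The constant and helper names
# below are hypothetical, not part of the Gemini SDK.
MAX_TUNED_INPUT_CHARS = 40_000


def oversized_examples(examples: list[dict]) -> list[int]:
    """Return indices of training examples whose input prompt plus desired
    response exceed the limit (mirroring the statement above)."""
    return [
        i for i, ex in enumerate(examples)
        if len(ex["text_input"]) + len(ex["output"]) > MAX_TUNED_INPUT_CHARS
    ]


def check_prompt(prompt: str) -> None:
    """Raise if a single request prompt exceeds the limit."""
    if len(prompt) > MAX_TUNED_INPUT_CHARS:
        raise ValueError(
            f"Prompt is {len(prompt)} characters; a tuned Gemini 1.5 Flash "
            f"model accepts at most {MAX_TUNED_INPUT_CHARS} per request."
        )


if __name__ == "__main__":
    dataset = [{"text_input": "Translate to French: hello", "output": "bonjour"}]
    print("Oversized examples:", oversized_examples(dataset))  # -> []
    check_prompt("Translate to French: good morning")          # passes silently
```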
Important Note: This limit is separate from the overall context window of the base Gemini 1.5 Flash model, which remains 1 million tokens. The model can therefore still remember and reference information from previous turns within that 1-million-token window.
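To make that distinction concrete, here is a hedged sketch using the `google-generativeai` Python SDK. The tuned model name `tunedModels/my-flash-tune` is a placeholder, and whether your tuned model accepts multi-turn chat should be confirmed against the docs; the point is that each outgoing message stays under 40,000 characters while the accumulated history counts against the 1-million-token context window:

```python
# Sketch only: assumes the google-generativeai package is installed,
# GOOGLE_API_KEY is set, and a tuned model named "tunedModels/my-flash-tune"
# exists in your project (placeholder name). The character guard is purely
# client-side and illustrative.
import os

import google.generativeai as genai

MAX_TUNED_INPUT_CHARS = 40_000  # per-request limit from the tuning docs

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("tunedModels/my-flash-tune")
chat = model.start_chat(history=[])


def ask(message: str) -> str:
    # Only this single outgoing message is subject to the 40,000-character
    # limit; the accumulated chat history sent along with it counts against
    # the model's 1M-token context window instead.
    if len(message) > MAX_TUNED_INPUT_CHARS:
        raise ValueError("Single request exceeds the 40,000-character limit.")
    return chat.send_message(message).text


print(ask("Summarize our previous discussion in one sentence."))
```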