The official website mentions a 5,000-character output limit for the training dataset, and only for the Gemini 1.5 Flash model. But I am unable to tune even with the Gemini 1.0 Pro model. Importantly, I was able to do this a few months ago, but it no longer works. It feels like a step backward, and I need to get this done ASAP! Any thoughts on this?
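For anyone hitting the same wall: before submitting a tuning job, it may help to check whether any training example's output exceeds the documented character limit. Below is a minimal sketch of such a check — the 5,000-character limit is the figure quoted above (not verified against current docs), and the `text_input`/`output` field names follow the Gemini tuning dataset format:

```python
# Sketch: flag tuning examples whose output exceeds an assumed
# 5,000-character limit (the figure quoted in the question; confirm
# the current value in the official docs before relying on it).
OUTPUT_CHAR_LIMIT = 5000

def oversized_examples(examples):
    """Return indices of examples whose 'output' field is over the limit."""
    return [
        i for i, ex in enumerate(examples)
        if len(ex.get("output", "")) > OUTPUT_CHAR_LIMIT
    ]

dataset = [
    {"text_input": "Summarize: ...", "output": "a short answer"},
    {"text_input": "Translate: ...", "output": "x" * 6000},  # exceeds limit
]
print(oversized_examples(dataset))  # prints [1]
```

If this reports no oversized examples and tuning still fails, the problem is likely on the service side rather than in the dataset.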