Hello, all,
My fine-tuning of gemini-1.5-flash-001-tuning ran through perfectly with 20 epochs, a batch size of 4, and a learning rate of 0.001. However, when I changed to 50 epochs, a batch size of 16, and a learning rate of 0.0002, the tuning job has been stuck at 0% of 100% for almost a day.
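For reference, the job is created roughly like this (a minimal sketch assuming the `google-generativeai` Python SDK; `training_data` here is just a stand-in for my actual dataset):

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Stand-in for the real training set: a list of input/output pairs.
training_data = [
    {"text_input": "1", "output": "2"},
    {"text_input": "2", "output": "3"},
]

# Only these three hyperparameters changed between the two runs.
operation = genai.create_tuned_model(
    source_model="models/gemini-1.5-flash-001-tuning",
    training_data=training_data,
    epoch_count=50,        # was 20
    batch_size=16,         # was 4
    learning_rate=0.0002,  # was 0.001
)
```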
I am running the fine-tuning through a Python script that authenticates with my Gemini API key. Nothing in the script changed except the epoch count, batch size, and learning rate. Is there a solution to this, or a way to figure out where it is getting stuck? I am using the exact same API key that the first, working tuning run used. Do I need to generate a new one?
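Would polling the job state with something like the following surface any useful error information? (Again a sketch assuming the `google-generativeai` SDK, with a placeholder tuned-model name; I may be misreading the SDK fields.)

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# List all tuning jobs under this key and their states
# (e.g. CREATING, ACTIVE, FAILED).
for m in genai.list_tuned_models():
    print(m.name, m.state)

# Inspect the stuck job directly; an empty snapshot list would mean
# no training steps have completed at all.
model = genai.get_tuned_model("tunedModels/placeholder-model-name")
print(model.state)
print(model.tuning_task.snapshots)
```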
Many thanks,
Michael