Using a Gemini API key to tune the Gemini-1.5-Flash-001-tuning model, stuck for hours?

Hello, all,

My fine-tuning of the Gemini-1.5-Flash-001-tuning model ran through perfectly with an epoch count of 20, a batch size of 4, and a learning rate of 0.001. However, when I changed the epoch count to 50, the batch size to 16, and the learning rate to 0.0002, the tuning has been stuck at 0% for almost a day.
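
For context, the tuning call in my script looks roughly like this (a minimal sketch; the API key, training data, and model ID below are placeholders):

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# Placeholder training examples; the real script loads its own dataset.
training_data = [
    {"text_input": "1", "output": "2"},
    {"text_input": "2", "output": "3"},
]

# Second run's hyperparameters; the first (working) run used
# epoch_count=20, batch_size=4, learning_rate=0.001.
operation = genai.create_tuned_model(
    source_model="models/gemini-1.5-flash-001-tuning",
    training_data=training_data,
    id="my-tuned-model",  # placeholder ID
    epoch_count=50,
    batch_size=16,
    learning_rate=0.0002,
)
print(operation.metadata)  # initial job metadata
```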

I am running the fine-tuning with the Gemini API key through a Python script. Nothing in the script changed except the epoch count, batch size, and learning rate. Is there a solution to this, or a way to figure out where it is getting stuck? I am using the exact same API key that I generated for the first, working tuning run. Do I need to generate a new API key?
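
The only visibility I have is the job state, which I poll roughly like this (a sketch; the tuned-model name is a placeholder):

```python
import time

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# Placeholder name; use tunedModels/<id> from the create_tuned_model call.
job = "tunedModels/my-tuned-model"

while True:
    model = genai.get_tuned_model(job)
    print(model.state)  # CREATING while initializing/training, ACTIVE when done
    if model.state.name != "CREATING":
        break
    time.sleep(60)
```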

Many thanks,
Michael

Update:

It just finished initializing, I am guessing, after ~22 hours.

One question though: is the 1.5-Flash-001 the only model we can tune with the Gemini API key?

Hi @Michael_Liang,

Currently, you can tune “gemini-1.5-flash-001” and “gemini-1.0-pro-001”.
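
You can confirm which models your key can tune by listing the models that support createTunedModel; a minimal sketch, with a placeholder API key:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# Models whose supported methods include createTunedModel can be tuned.
for m in genai.list_models():
    if "createTunedModel" in m.supported_generation_methods:
        print(m.name)
```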
