Gemini 1.5 Flash fine tuning with Vertex AI

In anticipation of Gemini 1.5 Flash's capabilities, we developed a tuning pipeline following the supervised fine-tuning instructions in the Vertex AI platform documentation. Almost all of this pipeline revolved around the vertexai library, and it took advantage of additional cloud resources like GCS for storing training/validation datasets.
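
For reference, the core of that pipeline is just a supervised tuning job launched through the SDK. A minimal sketch, assuming the vertexai preview tuning API; the project, location, bucket paths, and display name are placeholders:

```python
# Minimal sketch of the Vertex AI supervised fine-tuning flow.
# Project, location, bucket paths, and display name are placeholders.
import vertexai
from vertexai.preview.tuning import sft

vertexai.init(project="my-project", location="us-central1")

# Training/validation datasets are JSONL files staged in GCS.
tuning_job = sft.train(
    source_model="gemini-1.0-pro-002",
    train_dataset="gs://my-bucket/train.jsonl",
    validation_dataset="gs://my-bucket/validation.jsonl",
    tuned_model_display_name="my-tuned-model",
)
```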

Unfortunately, it looks like Gemini 1.5 tuning can only be completed through the google.generativeai library. This is somewhat disruptive for us, because the pipeline will need to be rewritten to use that library.
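
For anyone following along, the generativeai flow looks roughly like the sketch below; the API key handling, training examples, tuned-model id, and hyperparameters are placeholders:

```python
# Rough sketch of Gemini 1.5 Flash tuning via google.generativeai.
# API key, training examples, id, and hyperparameters are placeholders.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Training data is passed inline rather than staged in GCS.
operation = genai.create_tuned_model(
    source_model="models/gemini-1.5-flash-001-tuning",
    training_data=[
        {"text_input": "1", "output": "2"},
        {"text_input": "two", "output": "three"},
    ],
    id="my-tuned-flash",
    epoch_count=5,
    batch_size=4,
    learning_rate=0.001,
)
tuned_model = operation.result()  # blocks until the tuning job finishes
```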

Is there any plan to bring Gemini 1.5 Flash tuning into the Vertex AI platform? This would save a lot of development effort.

I can’t speak for Google, but I strongly suspect that, yes, gemini-1.5-flash will get a tuneable version on Vertex AI. My guess is that it will work much the same as the current gemini-1.0-pro tuning preview.

Thank you for the response! I hope you’re correct; I think I prefer the Vertex approach to supervised fine-tuning.

If I want to do supervised fine-tuning of Gemini 1.5 Flash with labelled videos (to recognize actions from video), how would I do it using Colab? I have been searching the Google Gemini documentation but could not find a specific guide.

At this time, fine-tuning of Gemini 1.5 Flash is strictly limited to text-to-text.

Thanks for the response. Follow-up question: can Gemini 1.5 Pro be fine-tuned with supervision on labelled videos using Colab (to recognize actions from video)? I have been searching the Google Gemini documentation but could not find a specific guide.

Unfortunately, Gemini 1.5 Pro cannot be fine-tuned.

Depending on the labelling required and the size of the video, you may be able to get adequate results by passing your identification instructions into the prompt.
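
A minimal sketch of that prompt-based approach using the File API; the file path, model name, and action labels here are purely illustrative:

```python
# Hypothetical sketch: labelling actions in a video by prompting
# rather than fine-tuning. File path and labels are illustrative.
import time
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Upload the video via the File API and wait until it is processed.
video_file = genai.upload_file(path="clip.mp4")
while video_file.state.name == "PROCESSING":
    time.sleep(5)
    video_file = genai.get_file(video_file.name)

model = genai.GenerativeModel("gemini-1.5-pro")
response = model.generate_content([
    video_file,
    "Identify the action performed in this video. "
    "Answer with exactly one label: running, jumping, or sitting.",
])
print(response.text)
```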

Does anyone here know how to connect the Gemini API to Make.com the way it works in Zapier.com’s free tier?