Can Gemini LLM Be Fine-Tuned Locally on Private Servers for Sensitive Data?

Hello, community!

We work with private data that must remain on our local servers due to strict data privacy regulations. We're exploring the possibility of using Gemini to fine-tune our models, but we need to ensure that all computation and data handling occur locally on our company's own hardware and HPC cluster.

My specific questions are:

  1. Does Gemini support local fine-tuning without any external data transmission?
  2. Are there specific tools, configurations, or APIs provided by Gemini to enable such secure, private fine-tuning workflows?
  3. How does this compare to fine-tuning other models like Llama in terms of ease, performance, and system requirements for on-premise use?

Thanks in advance!

No. Gemini is a closed, hosted model: its weights are not distributed, and fine-tuning is only offered through Google's cloud services (e.g., Vertex AI), so there is no way to run it entirely on your own hardware without external data transmission.

You could instead use the open-weight Gemma models with a local model runner, e.g. Ollama, vLLM, or llama.cpp; see the inference sketch below.
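
For example, here is a minimal offline-inference sketch using vLLM's Python API. The model path is a placeholder, and it assumes the Gemma weights have already been downloaded to local disk, so nothing leaves your servers at inference time:

```python
# Minimal offline-inference sketch with vLLM. The path below is a
# placeholder: point it at wherever the Gemma weights live on your
# local disk so no external calls are made.
from vllm import LLM, SamplingParams

llm = LLM(model="/models/gemma-2-2b-it")  # hypothetical local path
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(["Summarize our internal data-retention policy."], params)
print(outputs[0].outputs[0].text)
```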
Remember that high-performance GPUs are essential for running these models on your own hardware, and fine-tuning needs substantially more VRAM than inference unless you use a parameter-efficient method such as LoRA.
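
To make that concrete, here is a minimal local LoRA fine-tuning sketch using Hugging Face `transformers` and `peft`. The model path, adapter output path, and toy training text are placeholders; it assumes the weights are already on local storage, so no network access is needed at training time:

```python
# Minimal sketch of local LoRA fine-tuning for a Gemma model, assuming
# `transformers`, `peft`, and `torch` are installed and the weights have
# already been downloaded to local disk (no network access at train time).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

MODEL_PATH = "/models/gemma-2-2b-it"  # hypothetical local weights path

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH, torch_dtype=torch.bfloat16, device_map="auto"
)

# Attach small LoRA adapters instead of updating all weights; this keeps
# VRAM requirements manageable on a single high-end GPU.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Toy single training step on in-memory text; replace with a proper
# DataLoader over your private dataset.
batch = tokenizer(["Example sensitive-domain text."], return_tensors="pt").to(model.device)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4)

optimizer.zero_grad()
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
optimizer.step()

# Only the small adapter weights are written out; they stay on local disk.
model.save_pretrained("/models/gemma-lora-adapter")  # hypothetical output path
```

The same LoRA approach works for Llama and other open-weight models with essentially identical tooling, so the comparison in your question 3 comes down mostly to model quality and licensing rather than the fine-tuning workflow itself.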