- Further training is broken: baseModel is used instead of tunedModelSource.tunedModel.
- CreateTunedModelRequest.tuned_model: text_input is too long. The maximum character count accepted is 40000.
  CreateTunedModelRequest.tuned_model.tuning_task: output is too long. The maximum character count accepted is 5000.
  These limits count bytes, not characters (the JSON is UTF-8, not ASCII), and they are too small anyway: for a 16k-token model (Flash), 16k * 4 = 64k characters would be needed to fill the context. See the length-check sketch after this list.
- Add tunedModels.streamGenerateContent.
- Epochs are counted incorrectly in the operation's metadata.snapshots.
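A minimal sketch of the length problem, assuming the limit is applied to the UTF-8 encoding of each field. The helper and constant names below are mine; only the 40000 / 5000 figures come from the error messages above:

```python
# Compare the Python character count of a training example with the UTF-8
# byte count that the API apparently checks against its limits.
TEXT_INPUT_LIMIT = 40_000  # from the "text_input is too long" error
OUTPUT_LIMIT = 5_000       # from the "output is too long" error

def check_example(text_input: str, output: str) -> None:
    for name, value, limit in (
        ("text_input", text_input, TEXT_INPUT_LIMIT),
        ("output", output, OUTPUT_LIMIT),
    ):
        chars = len(value)
        utf8_bytes = len(value.encode("utf-8"))
        status = "rejected" if utf8_bytes > limit else "ok"
        print(f"{name}: {chars} chars, {utf8_bytes} UTF-8 bytes -> {status}")

# Non-ASCII text hits the limit long before the character count suggests:
# 25,000 'é' characters are already 50,000 UTF-8 bytes.
check_example("é" * 25_000, "ok")
```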
It’s actually counting tokens. Tokenization is fairly complex, and there is no easy way to guarantee that a full 64,000 characters will fit in the context, which is probably why the maximum is set at 40,000 characters: it heads off most errors of that kind.
You can check out more info on it here: Understand and count tokens | Gemini API | Google AI for Developers
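A short sketch of checking an example's token count with the google-generativeai SDK, along the lines of the linked page; the API key and model name are placeholders:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")            # placeholder
model = genai.GenerativeModel("gemini-1.5-flash")  # placeholder model name

example = {"text_input": "Translate 'hello' into French.", "output": "bonjour"}

# count_tokens reports how many tokens the model sees for this text, which is
# what actually decides whether an example fits in the context window.
result = model.count_tokens(example["text_input"] + "\n" + example["output"])
print(result.total_tokens)
```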
I am not sure there is currently a feature that lets you fine-tune a previously fine-tuned model. This might not be a bug, but I can't find anything about it in the documentation.
Do you have any examples or more information you can share?
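For reference, the field named in the original post would go into the tunedModels.create request body roughly like this. This is only a sketch pieced together from the report above and the v1beta REST tuning quickstart; the field layout and the model name are assumptions, and I have not verified that further training actually works this way:

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder

# tunedModels.create body: for further training, the source model would be
# supplied via tunedModelSource.tunedModel instead of baseModel (assumption
# based on the field names in the report above).
body = {
    "display_name": "further-tuned example",
    "tuned_model_source": {
        "tuned_model": "tunedModels/my-first-tuned-model"  # hypothetical model id
    },
    "tuning_task": {
        "training_data": {
            "examples": {
                "examples": [
                    {"text_input": "1", "output": "2"},
                    {"text_input": "3", "output": "4"},
                ]
            }
        }
    },
}

resp = requests.post(
    "https://generativelanguage.googleapis.com/v1beta/tunedModels",
    params={"key": API_KEY},
    json=body,
)
print(resp.status_code, resp.json())
```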
Nothing has changed in the past two months.
The developers don’t seem to be looking at the forum.