I have fine-tuned Gemini 1.0 Pro (gemini-1.0-pro-001) in Google AI Studio on about 50 examples using a structured prompt. When I test it in AI Studio, it performs quite well. However, when I call it through the API with the same hyperparameters, it returns different, lower-quality answers.
I call the model through the API with this payload:
{
  "contents": [
    {
      "parts": [
        {
          "text": "my prompt here"
        }
      ],
      "role": "user"
    }
  ],
  "generation_config": {
    "temperature": 0,
    "top_p": 0,
    "top_k": 1,
    "max_output_tokens": 1000,
    "response_mime_type": "text/plain"
  }
}
However, I noticed that the code AI Studio provides under its "Get code" section structures the prompt differently:
response = model.generate_content([
    "input: my prompt here",
    "output: ",
])
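My current theory is that the tuned model saw the "input:"/"output:" framing during training, so sending the bare prompt over REST puts it off-distribution. As a sketch of how I could test this, I tried building the REST payload with the same framing (the exact prefixes are an assumption taken from the "Get code" snippet above; build_structured_payload is just a helper name I made up):

```python
import json

def build_structured_payload(user_text: str) -> str:
    """Build the generateContent REST payload, wrapping the prompt in the
    'input:'/'output:' framing that AI Studio's structured prompt appears
    to use (an assumption based on the generated code snippet)."""
    payload = {
        "contents": [
            {
                "parts": [{"text": f"input: {user_text}\noutput: "}],
                "role": "user",
            }
        ],
        "generation_config": {
            "temperature": 0,
            "top_p": 0,
            "top_k": 1,
            "max_output_tokens": 1000,
            "response_mime_type": "text/plain",
        },
    }
    return json.dumps(payload)

print(build_structured_payload("my prompt here"))
```

I have not yet confirmed whether this framing is what AI Studio actually sends under the hood.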
Could the difference in prompt structure be causing the issue, or could there be another reason for the discrepancy in the responses?
Thanks in advance for any help.