Different Responses in AI Studio and API for Fine-Tuned Gemini 1.0 Model

I have fine-tuned gemini-1.0-pro-001 using Google AI Studio with about 50 examples (Structured Prompt). When I test it in AI Studio, it performs quite well. However, when I call it through the API with the same hyperparameters, it returns different, lower-quality answers.

I call the model through the API with this payload:

{
    "contents": [
        {
            "parts": [
                {
                    "text": "my prompt here"
                }
            ],
            "role": "user"
        }
    ],
    "generation_config": {
        "temperature": 0,
        "top_p": 0,
        "top_k": 1,
        "max_output_tokens": 1000,
        "response_mime_type": "text/plain"
    }
}

However, I noticed that the code AI Studio provides under the “get code” section structures the prompt differently:

response = model.generate_content([
  "input: my prompt here",
  "output: ",
])
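For context, a structured-prompt fine-tune trains the model on examples framed with "input:"/"output:" labels, which is why the generated code prepends them. A minimal sketch of that framing, using a hypothetical helper name (not from AI Studio's generated code):

```python
# Sketch: reproduce the "input:"/"output:" framing that AI Studio's
# structured prompts use during tuning. `build_structured_parts` is a
# hypothetical helper, not part of any SDK.
def build_structured_parts(prompt: str) -> list[str]:
    """Wrap a raw prompt in the labels the tuned model saw during training."""
    return [f"input: {prompt}", "output: "]

parts = build_structured_parts("my prompt here")
# The resulting list would then be passed to model.generate_content(parts).
```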

Could the difference in prompt structure be causing the issue, or could there be another reason for the discrepancy in the responses?

Thanks in advance for any help.

Update: I just resolved it. It was indeed a problem with the structure of my payload; I changed it to this:

{
    "contents": [
        {
            "parts": [
                {
                    "text": "input: my prompt here"
                },
                {
                    "text": "output: "
                }
            ],
            "role": "user"
        }
    ],
    "generation_config": {
        "temperature": 0,
        "top_p": 0,
        "top_k": 1,
        "max_output_tokens": 1000,
        "response_mime_type": "text/plain"
    }
}
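The key change is that the prompt text carries the same "input:"/"output:" labels used in the structured training examples. As a sketch, the corrected request body can be built programmatically before POSTing it to the generateContent endpoint (the helper name below is illustrative, not from the original post):

```python
# Sketch: build the corrected request body for a structured-prompt tuned model.
# The "input:"/"output:" labels must match what the model saw during tuning.
def build_payload(prompt: str) -> dict:
    return {
        "contents": [
            {
                "parts": [
                    {"text": f"input: {prompt}"},
                    {"text": "output: "},
                ],
                "role": "user",
            }
        ],
        "generation_config": {
            "temperature": 0,
            "top_p": 0,
            "top_k": 1,
            "max_output_tokens": 1000,
            "response_mime_type": "text/plain",
        },
    }

payload = build_payload("my prompt here")
# `payload` can then be sent as the JSON body of the generateContent request.
```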

Now the API responses are consistent with what I observed in AI Studio. Hope this helps anyone who runs into the same problem.
