Consistent output with Gemini AI model 1.5 Flash

REST Endpoint = https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent

generation_config = {'temperature': 2.0,
                     'maxOutputTokens': 8192,
                     'topP': 0.10,
                     'topK': 1,
                     'response_mime_type': 'text/plain',  # 'application/json'
                     # 'response_schema': response_schema
                     }
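For context, here is a minimal sketch of what a request to that endpoint looks like with this configuration (assuming an API key in a GEMINI_API_KEY environment variable; the query text is a placeholder, not from my actual prompt):

import os
import requests

API_KEY = os.environ["GEMINI_API_KEY"]  # assumed location of the API key
URL = (
    "https://generativelanguage.googleapis.com/v1beta/models/"
    "gemini-1.5-flash:generateContent?key=" + API_KEY
)

body = {
    "contents": [{"parts": [{"text": "My data query goes here"}]}],
    # The REST API expects camelCase field names inside generationConfig.
    "generationConfig": {
        "temperature": 2.0,
        "maxOutputTokens": 8192,
        "topP": 0.10,
        "topK": 1,
        "responseMimeType": "text/plain",
    },
}

resp = requests.post(URL, json=body, timeout=60)
resp.raise_for_status()
print(resp.json()["candidates"][0]["content"]["parts"][0]["text"])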

I have provided System Instructions in as much detail as I can, along with sample output, but I keep seeing different response values for the same data query.

I also tried using a JSON schema, but with no luck.

Can you please suggest whether I need to change the temperature/topP/topK settings in the above configuration?

Please guide.

Hi @harshal_sarode,

Welcome to the forum!

Temperature controls the randomness. Higher values (like 2.0) make the model more creative and random, which often results in different outputs for the same query. Lower values make the model output more deterministic and consistent. Try to keep it close to 0.
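For example, something like this (a sketch using the google-generativeai Python SDK; the API key and prompt are placeholders, and the exact values are just a starting point):

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# Low-randomness settings aimed at repeatable output for the same prompt.
model = genai.GenerativeModel(
    "gemini-1.5-flash",
    generation_config={
        "temperature": 0.0,        # minimal randomness
        "top_p": 0.1,
        "top_k": 1,                # always pick the most likely token
        "max_output_tokens": 8192,
        "response_mime_type": "text/plain",
    },
)

response = model.generate_content("Same data query as before")
print(response.text)

Note that even at temperature 0 the output is not guaranteed to be byte-for-byte identical across calls, but it should be far more consistent than with temperature 2.0.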
