Hi team,
I’m experiencing occasional discrepancies when running the same prompt and image through Gemini multiple times, even with temperature=0, top_p=1, top_k=1, and candidate_count=1. For example, out of 10 identical runs, 2 yield different (incorrect) results. Is this due to internal nondeterminism in the model or OCR steps? Are there any official recommendations or documentation on ensuring deterministic behavior?
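For reference, here is roughly how I’m calling the model — a minimal sketch using the google-generativeai Python SDK, where the API key, model name, prompt, and image path are placeholders for my actual setup:

```python
import google.generativeai as genai
import PIL.Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder

model = genai.GenerativeModel("gemini-1.5-flash")  # placeholder model name
image = PIL.Image.open("document.png")             # placeholder image

# Greedy decoding settings intended to make sampling deterministic.
response = model.generate_content(
    ["Extract the invoice number from this image.", image],  # placeholder prompt
    generation_config=genai.GenerationConfig(
        temperature=0,      # no sampling randomness
        top_p=1,
        top_k=1,            # always take the single most likely token
        candidate_count=1,  # one candidate per request
    ),
)
print(response.text)
```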
Thanks!
Hi @Devi_Kathirvel, welcome to the forum!

As you rightly mentioned, every LLM exhibits a certain level of non-deterministic behaviour. Along with temperature, top_p, and top_k, you can also make this behaviour more constrained with better prompting. You can use few-shot prompting and meta prompting, and perhaps use structured output to get the responses as a consistent JSON object, which reduces the chance of confusion or error for the model; see the sketch below.
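For example, here is a hedged sketch of requesting structured JSON output with the same SDK (supported in recent SDK versions); the schema and field names are illustrative assumptions, so adapt them to your extraction task:

```python
from typing_extensions import TypedDict

import google.generativeai as genai

# Illustrative schema: adjust the fields to match your task.
class InvoiceResult(TypedDict):
    invoice_number: str
    total_amount: str

model = genai.GenerativeModel("gemini-1.5-flash")  # placeholder model name
response = model.generate_content(
    "Extract the invoice number and total from the attached document.",
    generation_config=genai.GenerationConfig(
        temperature=0,
        response_mime_type="application/json",  # force a JSON response
        response_schema=InvoiceResult,          # constrain it to this shape
    ),
)
print(response.text)  # a JSON string matching the schema
```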
Thank you!