Hello,
Multiple requests on the same audio with the same prompt may produce different results (ASR task), even with temperature=0.
How can I avoid unstable output?
Thanks.
Fully stable output from LLMs just doesn't happen; you will see some variability with any LLM you use. You can, however, generally narrow it by reducing the temperature, which may also make the output less human-sounding and more robotic. You should test to find the best setting for your application.
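A minimal sketch of why temperature matters, using a toy next-token sampler (the logit values are made up for illustration; real model internals differ): at temperature 0 decoding collapses to argmax and is deterministic, while at higher temperatures nearby logits keep a real chance of being sampled, which is one source of run-to-run variability.

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Pick a token index from temperature-scaled logits (toy example)."""
    # Greedy decoding at temperature 0: always pick the argmax token.
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    # Otherwise sample from the softmax of the temperature-scaled logits.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs)[0]

# Hypothetical next-token scores: tokens 0 and 1 are nearly tied.
logits = [2.0, 1.9, 0.5]

# temperature=0 is deterministic: the same token wins on every run.
greedy = {sample_token(logits, 0, random.Random(seed)) for seed in range(100)}
print(greedy)  # {0}

# temperature=1 still picks the runner-up fairly often, so outputs vary.
sampled = {sample_token(logits, 1.0, random.Random(seed)) for seed in range(100)}
print(sampled)
```

Note that even with temperature=0, hosted models can still show small variations between runs (e.g. from batching or floating-point nondeterminism on the server side), which is why the setting narrows the variability rather than eliminating it.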
https://github.com/GoogleCloudPlatform/generative-ai/issues/289
According to that issue, you should be able to set a seed, but Gemini doesn't support that, as far as I know.