Hey guys. I’m building a chatbot for hotels using Gemini and my clients have been experiencing some hallucinations from time to time. The prompt used is in the following format:
"
Hotel information: {HOTEL_INFORMATION_RELATED_TO_USER_QUESTION}
Question: {USER_QUESTION}
" .
In some specific scenarios, Gemini has been hallucinating and inventing information, even when the correct information for the answer is provided in the “Hotel Information” section. Do you have any ideas on what I can do to stop or at least decrease the number of hallucinations and wrong answers?
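For context, this is roughly how the prompt is assembled and sent today; a minimal sketch assuming the google-generativeai Python SDK (the model name and function are just placeholders):

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")  # placeholder model name

def answer(hotel_information: str, user_question: str) -> str:
    # Fill the template exactly as described above.
    prompt = (
        f"Hotel information: {hotel_information}\n"
        f"Question: {user_question}"
    )
    return model.generate_content(prompt).text
```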
Build out a RAG pipeline, and make sure you train it hard, e.g. with a grid search for the fine-tune.
When you use RAG, include a few examples in-context (ICL), which helps keep the model grounded.
- RAG
- Finetune
- Ground truth
This is about the best you can do. Models trained on larger corpora will do better than smaller ones, but every model will still hallucinate sometimes; it depends on how you catch it.
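A minimal sketch of the RAG idea, assuming sentence-transformers for the embeddings (the model name and hotel snippets are illustrative): retrieve the hotel passages closest to the question and paste only those into the prompt, together with an explicit instruction to answer from them alone.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Illustrative hotel knowledge snippets; in practice these come from your hotel data.
snippets = [
    "Check-in starts at 3 pm and check-out is at 11 am.",
    "The pool is open daily from 7 am to 9 pm.",
    "Breakfast is served in the lobby restaurant from 6:30 am to 10 am.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
snippet_vecs = embedder.encode(snippets, normalize_embeddings=True)

def build_prompt(question: str, top_k: int = 2) -> str:
    # Embed the question and keep the top_k most similar snippets (cosine similarity).
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    best = np.argsort(snippet_vecs @ q_vec)[::-1][:top_k]
    context = "\n".join(snippets[i] for i in best)
    # Grounding instruction: answer only from the retrieved context.
    return (
        "Answer using ONLY the hotel information below. "
        "If the answer is not there, say you don't know.\n"
        f"Hotel information:\n{context}\n"
        f"Question: {question}"
    )

print(build_prompt("What time does the pool open?"))
```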
You can also look at Hugging Face Transformers Agents, which call individual transformer-based tools (for example creating photos, speech, etc.), using one part of the transformer stack at a time, so they are something you can use to check whether your pipeline is doing things right.
Check out the Hugging Face blogs.
I get you though: if you sat and watched a human receptionist, she would make more mistakes than the model. Also, give it plenty of examples (900+); the larger the set, the better, in a way. Train it hard.
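If you do go the fine-tuning route, the training set is basically a pile of (context, question, grounded answer) records; a sketch of the general shape as JSONL (the records here are made up, and the exact schema depends on the fine-tuning service you use):

```python
import json

# Hypothetical examples; each answer must be fully supported by its context.
records = [
    {
        "context": "Check-in starts at 3 pm and check-out is at 11 am.",
        "question": "When can I check in?",
        "answer": "Check-in starts at 3 pm.",
    },
    {
        "context": "The pool is open daily from 7 am to 9 pm.",
        "question": "Is there a spa?",
        "answer": "I don't know based on the provided information.",
    },
]

with open("hotel_finetune.jsonl", "w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec, ensure_ascii=False) + "\n")
```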
Lastly, train your RAG components as well (see the sketch after this list):
retrievers
rerankers
embeddings
etc.
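For the reranker part, a common pattern is to over-retrieve with the embedding model and then re-score the candidates with a cross-encoder; a minimal sketch assuming sentence-transformers (the model name and candidate passages are illustrative):

```python
from sentence_transformers import CrossEncoder

# Candidates would normally come from the embedding retriever above.
candidates = [
    "Breakfast is served in the lobby restaurant from 6:30 am to 10 am.",
    "The pool is open daily from 7 am to 9 pm.",
    "Check-in starts at 3 pm and check-out is at 11 am.",
]

question = "Until what time is breakfast served?"

# The cross-encoder scores each (question, passage) pair jointly, which is
# usually more accurate than embedding similarity alone.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
scores = reranker.predict([(question, passage) for passage in candidates])

ranked = sorted(zip(scores, candidates), reverse=True)
for score, passage in ranked:
    print(f"{score:.2f}  {passage}")
```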