Hi,
I am building a bot that uses a file of frequently asked questions as its knowledge source, so when a user asks a question, the bot answers from that dataset.
I also want it to handle chitchat, but right now, when the user tries to chitchat, it still tries to answer from the dataset.
I wrote the code below to experiment and tune, but I don't know if this is the correct approach.
Also, Gemini sometimes starts its answer with "based on the text you gave", but it should not say this; it should only use the text file for information lookup.
import google.generativeai as genai
import pathlib

# assumes the API key is already configured, e.g. genai.configure(api_key=...)
media = pathlib.Path(__file__).parents[1] / "geminibot"
model = genai.GenerativeModel(model_name="gemini-1.5-flash-002")
myfile = genai.upload_file(media / "faq.txt")
def automated_conversation():
    while True:
        user_input = input("You: ")
        print("You typed: {}".format(user_input))
        if user_input.lower() == "exit":
            print("Conversation ended.")
            break
        response = model.generate_content([user_input, myfile])
        print("AI:", response.text)

automated_conversation()
Thanks a lot for your help in advance.
Hi @GurhanCagin
One approach is to use Gemini itself as an intent classifier: write a prompt that determines whether the input is a frequently asked question or chitchat.
Based on the classification, you can then make two separate API calls: one for handling frequently asked questions and another for chitchat.
Here is an example prompt for intent classification.
Classification prompt:
Classify the following user input into one of the intents: "FAQ", "Chitchat", or "Other". Use the examples below as guidance:
Example 1:
User Input: "What is your return policy?"
Intent: FAQ
Example 2:
User Input: "How are you?"
Intent: Chitchat
Example 3:
User Input: "Can you tell me the weather forecast?"
Intent: Other
Now classify this input:
User Input: "How do I reset my password?"
Intent:
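To show how this could be wired up, here is a minimal sketch that reuses the `model` and `myfile` objects from your code; the classification prompt and the string check on its result are only illustrative and should be tuned:

def answer_with_intent_routing(user_input):
    # Step 1: ask Gemini to classify the input (output restricted to the intent label).
    classification_prompt = (
        'Classify the following user input into one of the intents: '
        '"FAQ", "Chitchat", or "Other". Output only the intent.\n'
        "User Input: {}\nIntent:".format(user_input)
    )
    intent = model.generate_content(classification_prompt).text.strip()

    # Step 2: route to one of two separate calls based on the predicted intent.
    if "FAQ" in intent:
        response = model.generate_content([user_input, myfile])  # grounded in the FAQ file
    else:
        response = model.generate_content(user_input)  # plain chitchat, no dataset
    return response.text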
Thanks
I thought of the same thing, but since Gemini is very powerful, I was wondering whether there is another way in which Gemini itself could handle this.
@GurhanCagin
Without intent classification, you can write a clear prompt that differentiates between FAQ-based answers and chitchat.
For example:
Prompt: Answer questions based on a dataset when they match. If the question is casual or conversational and not in the dataset, respond naturally.
Along with this prompt, you can include a few examples of both behaviors so the model better understands what is expected.
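As an illustration, a single call can carry the instruction, the uploaded file, and the user input together; the wording of the instruction below is only an example to adapt:

combined_instruction = (
    "Answer the user's question from the attached FAQ document when it matches. "
    "If the question is casual or conversational and not covered by the document, "
    "respond naturally as a friendly assistant. Never mention the document itself."
)

def answer(user_input):
    # One call handles both FAQ answers and chitchat; the instruction decides the behavior.
    response = model.generate_content([combined_instruction, myfile, "User: " + user_input])
    return response.text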
Another approach is to solve the problem by building a Retrieval-Augmented Generation (RAG) system. If relevant content exists, you can retrieve it and provide it to the model; otherwise, you can simply call the model for chitchat.
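For the RAG route, here is a rough sketch of the retrieval step using the embedding model from the same library; the embedding model name, the similarity threshold, and the assumption that faq.txt contains blank-line-separated entries are all choices to adapt:

import numpy as np
import google.generativeai as genai

# Assumed format: each FAQ entry is a question/answer block separated by blank lines.
faq_entries = open("faq.txt", encoding="utf-8").read().split("\n\n")

# Embed every FAQ entry once up front.
faq_embeddings = [
    genai.embed_content(model="models/text-embedding-004", content=entry)["embedding"]
    for entry in faq_entries
]

def retrieve(user_input, threshold=0.7):
    # Embed the query and pick the most similar FAQ entry by cosine similarity.
    query = genai.embed_content(model="models/text-embedding-004", content=user_input)["embedding"]
    scores = [np.dot(query, emb) / (np.linalg.norm(query) * np.linalg.norm(emb))
              for emb in faq_embeddings]
    best = int(np.argmax(scores))
    return faq_entries[best] if scores[best] >= threshold else None

def rag_answer(user_input):
    context = retrieve(user_input)
    if context:
        # Relevant FAQ content found: ground the answer in it.
        prompt = "Answer using this FAQ entry:\n{}\n\nQuestion: {}".format(context, user_input)
        return model.generate_content(prompt).text
    # Nothing relevant retrieved: fall back to plain chitchat.
    return model.generate_content(user_input).text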
Thanks
Hi Susarla,
Based on your hint, I did some investigation and made a few attempts.
I changed my code as shown below, and it now more or less gives me what I need; I will tune it further.
One thing I am still trying to achieve: when Gemini answers, it sometimes says "based on the text you provided", but the user will not normally know that a text file was provided, so I want to avoid that; Gemini should not say "based on the text provided" in its answers. For this I added the instruction "output only answer", but I don't know if this is the correct approach.
import markdown  # needed for markdown.markdown() below

def automated_conversation():
    while True:
        user_input = input("You: ")
        if user_input.lower() == "exit":
            print("Conversation ended.")
            break
        prompt = """
        Given a user's input, classify their intent, such as "chitchatting", "chat", "asking question". Output only the intent.
        user input: {}
        """.format(user_input)
        response = model.generate_content(prompt)
        if ("chitchatting" in response.text) or ("chat" in response.text):
            print("Chat Mode")
            response = model.generate_content(user_input)
            print("AI:", response.text)
        else:
            print("Question Mode")
            ## response = model.generate_content(["When answering, don't mention 'based on the text'\nuser input: {}".format(user_input), myfile])
            response = model.generate_content([user_input, myfile, "output only answer"])
            print("AI:", markdown.markdown(response.text))
@GurhanCagin
Yes, your approach is correct. You can specify in the prompt exactly what kind of response you need.
For example, you can include in the prompt: "Avoid phrases like 'based on the text you gave' or 'from the dataset'. Instead, provide clear and direct answers." Additionally, if possible, consider using few-shot prompting for better results.
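To illustrate the few-shot idea, the FAQ call could carry a couple of style examples alongside the file; the question/answer pairs below are made up and should be replaced with entries from your own dataset:

style_examples = """
Example 1:
Question: What is your return policy?
Answer: You can return items within 30 days of purchase.

Example 2:
Question: How do I reset my password?
Answer: Go to Settings > Account > Reset Password and follow the steps.
"""

direct_answer_instruction = (
    "Answer the user's question directly, in the same style as the examples. "
    "Never say 'based on the text' or mention any document or dataset."
)

def faq_answer(user_input):
    # Few-shot examples + instruction + uploaded FAQ file + the actual question.
    response = model.generate_content([style_examples, direct_answer_instruction, myfile, user_input])
    return response.text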
Thanks