Seeking assistance with connecting GEMINI AI to WhatsApp

I’m currently working on a chatbot project that utilizes GEMINI AI and WhatsApp integration. However, I’m encountering challenges in connecting the trained GEMINI AI model to the server to enable it to respond coherently based on its training. The model is currently generating incoherent responses.

Could someone please assist me with this issue? I’ve included the code I’m using below:


[code screenshots attached]
Additional details:

  • I’ve trained the GEMINI AI model using a dataset of WhatsApp conversations.
  • I’ve implemented the integration with the WhatsApp Business API.
  • The server receives messages from WhatsApp users and sends them to the GEMINI AI model to generate responses.
  • The issue lies in the model’s generated responses not being coherent with the context of the conversations.
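Roughly, the server flow described in the bullets above can be sketched like this. This is a simplified sketch, not my actual code: the payload field names assume the WhatsApp Cloud API webhook format, and the model name "gemini-pro" is a placeholder — adjust both to your setup.

```python
# Simplified sketch of the flow above: receive a WhatsApp webhook,
# extract the user's text, forward it to Gemini for a reply.
# Payload field names assume the WhatsApp Cloud API webhook format.
import json

def extract_text(payload: dict):
    """Pull the user's message text out of a webhook payload.

    Returns None for status updates and unsupported message types.
    """
    try:
        return (payload["entry"][0]["changes"][0]["value"]
                       ["messages"][0]["text"]["body"])
    except (KeyError, IndexError):
        return None

def handle_webhook(raw_body: bytes):
    """Turn a raw webhook body into a Gemini-generated reply (or None)."""
    text = extract_text(json.loads(raw_body))
    if text is None:
        return None
    # Import here so the sketch loads even without the SDK installed.
    import google.generativeai as genai
    model = genai.GenerativeModel("gemini-pro")  # placeholder model name
    return model.generate_content(text).text
```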

Any assistance in resolving this problem would be greatly appreciated.

Hey there and welcome to the forum!

Can you provide some examples of its actual outputs, and of the output you intend it to produce?

So, it seems you’re using a fine-tuned model. How did you set up the data for fine-tuning? Were you trying to develop a particular style? It’s likely that it was fine-tuned incorrectly and that data from its tuning dataset is affecting its output. From your description of what you intend to do, I don’t see why a fine-tuned model is necessary here.


Hi, thanks for your reply. The problem is that it does not produce the expected output, which is why I am trying to use a fine-tuned model. However, I cannot connect to the fine-tuned model. The goal is to implement a chatbot that serves as a virtual assistant for a company, allowing users to request information and ask questions through WhatsApp. I am trying to develop a particular style, but when I attempt to connect to the fine-tuned model it displays the following error:

I think the culprit is model.generateContentStream in your async function run(), if I’m reading the docs properly.

It should look like this:

response = model.generate_content("Write a cute story about cats.", stream=True)

source:

So, in the Python SDK you stream content by passing stream=True as a parameter to generate_content, not by calling a separate function. (The Node SDK exposes model.generateContentStream() instead, so make sure you’re following the docs for the SDK you’re actually using.)
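To make that concrete, here is a minimal sketch of consuming the stream with the Python SDK. The model argument is assumed to be an already-configured genai.GenerativeModel — the configuration step itself is omitted:

```python
def stream_reply(model, prompt: str) -> str:
    """Collect a streamed Gemini response into one string.

    `model` is assumed to be a configured genai.GenerativeModel.
    Note that stream=True is passed inside generate_content rather
    than via a separate streaming call.
    """
    response = model.generate_content(prompt, stream=True)
    parts = []
    for chunk in response:  # each chunk carries a .text fragment
        parts.append(chunk.text)
    return "".join(parts)
```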

Otherwise, double-check the fine-tuned model name for spelling errors; that’s the first thing that comes to mind. I would also check the parts variable to ensure it’s set up correctly.
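One way to rule out a naming typo is to compare the configured name against what the API actually reports. A rough sketch, assuming the Python SDK’s list_tuned_models() and the "tunedModels/&lt;id&gt;" naming scheme:

```python
def normalize_model_name(name: str) -> str:
    """Tuned models are addressed as "tunedModels/<id>"; add the
    prefix if the configured value omits it."""
    name = name.strip()
    return name if name.startswith("tunedModels/") else "tunedModels/" + name

def tuned_model_exists(configured_name: str) -> bool:
    """Check whether the configured name matches any tuned model
    visible to your API key."""
    # Import here so the helper above stays usable without the SDK.
    import google.generativeai as genai
    wanted = normalize_model_name(configured_name)
    return any(m.name == wanted for m in genai.list_tuned_models())
```

Printing the names that list_tuned_models() returns and diffing them against the string your code passes to GenerativeModel usually settles the question quickly.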