The Gemini API doesn't explicitly document a sendMessage() function

I saw an example using sendMessage() to implement multi-turn conversation at this link: https://ai.google.dev/gemini-api/docs/text-generation?lang=python#chat, where the model apparently maintains message history internally.

However, the API reference lacks further details on this feature. I’m unable to determine how to identify different chat sessions, how to clear the context, or how to check the current context length using this method. Does anyone know where I can find more detailed usage information?


Hi @ding_fow, to clear the context of a chat session you can clear chat.history. Since chat.history is a list, you can assign an empty list to it, for example chat.history = [].

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # set your API key before making calls

model = genai.GenerativeModel("gemini-1.5-flash")
chat_1 = model.start_chat(
    history=[
        {"role": "user", "parts": "Hello"},
        {"role": "model", "parts": "Great to meet you. What would you like to know?"},
    ]
)

response_1 = chat_1.send_message("I have 2 dogs in my house.")
print(response_1.text)
response2_1 = chat_1.send_message("How many paws are in my house?")
print(response2_1.text)

response3_1 = chat_1.send_message("how many dogs are there in my house?")
print(response3_1.text) #output: You said you have two dogs in your house.

# After clearing the context
chat_1.history = []

response4_1 = chat_1.send_message("how many dogs are there in my house?")
print(response4_1.text) #output: As an AI, I have no access to your home and therefore cannot know how many dogs you have.

If you want to remove specific context from the chat history, you can remove it with the pop method.

chat_2=model.start_chat(
    history=[
        {"role": "user", "parts": "Hello"},
        {"role": "model", "parts": "Great to meet you. What would you like to know?"},
    ]
)
response_2 = chat_2.send_message("I have 10 dogs in my house.")
print(response_2.text)
response2_2 = chat_2.send_message("How many paws are in my house?")
print(response2_2.text)

response3_2 = chat_2.send_message("how many dogs are there in my house?")
print(response3_2.text) #output: You said you have 10 dogs in your house.

# Removing the user message that states the number of dogs (index 2 in the history)
chat_2.history.pop(2)

response4_2 = chat_2.send_message("how many dogs are there in my house?")
print(response4_2.text) #output: I don't know how many dogs are in your house.  You haven't told me.
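One caveat: chat.history alternates user and model turns, so pop(2) removes only the user message; the model reply that follows it may still mention the dog count. A small sketch of removing a user turn together with its reply (a plain list stands in for chat.history here, so it runs without an API key; with the real SDK the entries are Content objects):

```python
# A plain list standing in for chat.history, so this sketch runs offline.
history = [
    {"role": "user", "parts": "Hello"},
    {"role": "model", "parts": "Great to meet you. What would you like to know?"},
    {"role": "user", "parts": "I have 10 dogs in my house."},
    {"role": "model", "parts": "Ten dogs! That is a lively household."},
]

def remove_turn(history, index):
    """Remove the user message at `index` along with the model reply
    that immediately follows it, if there is one."""
    del history[index]
    if index < len(history) and history[index]["role"] == "model":
        del history[index]

remove_turn(history, 2)
print(len(history))  # 2
```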

You can use usage_metadata to get details about the input and output token counts:

response4_2.usage_metadata

#output
prompt_token_count: 125
candidates_token_count: 22
total_token_count: 147
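Because each request resends the full history, the prompt_token_count of the most recent response is a good proxy for the current context length (the SDK also offers model.count_tokens(chat.history) for an explicit count). A sketch with stand-in objects, since real responses require an API call; the Usage class below is hypothetical and only mirrors the usage_metadata field names shown above:

```python
# Usage is a stand-in for response.usage_metadata, mirroring its fields.
class Usage:
    def __init__(self, prompt_tokens, candidate_tokens):
        self.prompt_token_count = prompt_tokens
        self.candidates_token_count = candidate_tokens
        self.total_token_count = prompt_tokens + candidate_tokens

# Stand-ins for the responses from two successive send_message calls.
responses = [Usage(125, 22), Usage(150, 30)]

# The latest prompt_token_count reflects the context sent last turn.
latest = responses[-1]
print(latest.prompt_token_count)  # 150
```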

You can keep track of the variables you define for each chat session (for example, by storing them in a dictionary) to count your chat sessions.
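One way to do that is a small registry keyed by session name. In this sketch plain dicts stand in for the objects model.start_chat() would return, so it runs without the SDK; get_session and the sessions dict are illustrative names, not SDK APIs:

```python
# Registry of named chat sessions; values would normally be the
# objects returned by model.start_chat().
sessions = {}

def get_session(name, factory=dict):
    """Return the session for `name`, creating it on first use."""
    if name not in sessions:
        sessions[name] = factory()
    return sessions[name]

get_session("support")
get_session("research")
get_session("support")  # second lookup reuses the existing session
print(len(sessions))    # 2
```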

Please refer to this gist for a code example. Thank you.


Thank you so much for your thorough and helpful answer; I learned a great deal. DeepSeek also told me that chat.history has deeper usage, similar to official summary functions. I'd like to find more detailed official documentation, but the links DeepSeek provided are unavailable, and I'm not sure the functions or methods it described actually exist. Are there any such documents, or an open-source code repository, available? Thank you!