What is the best way to persist chat history to a file?

I’m looking for the best way to store chat history (manually) so it persists even after the script finishes running.

I couldn’t find any documentation or cookbook about storing chat history in a file.
I can use “Get Code” and refer to the sample history parameter of the start_chat method: append each prompt and response to the list of dictionaries with role/parts keys, alternating between the user and the model, store it either arbitrarily in pickle or serialize/deserialize it through JSON/YAML, and load it again the next time the script runs.
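Something like this minimal sketch is what I mean, assuming text-only parts (the file name and model name are placeholders):

import json, os
import google.generativeai as genai

HISTORY_FILE = "chat_history.json"  # placeholder path

def load_history():
    if os.path.exists(HISTORY_FILE):
        with open(HISTORY_FILE, "r") as f:
            return json.load(f)
    return []

def save_history(history):
    with open(HISTORY_FILE, "w") as f:
        json.dump(history, f, indent=2)

history = load_history()
model = genai.GenerativeModel("gemini-1.5-flash")  # placeholder model name
chat = model.start_chat(history=history)

response = chat.send_message("Who are you?")
history.append({"role": "user", "parts": ["Who are you?"]})
history.append({"role": "model", "parts": [response.text]})
save_history(history)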

The problem here is that this influences the responses (e.g. sending a prompt that already appears in the history gives the same answer instead of a different one), as well as how the model performs function calls. That’s because the ChatSession receives the updated history in that syntax, and yes, it influences the response.

Here I’m using a Discord bot that uses the Gemini API; sending the same prompt results in the same response, but the response is different when I give a different prompt.

Prompt: “Who are you?”

I would use a database, with the text and timestamps. You can also embed these to retrieve the parts of history relevant to the current conversation.
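For instance, a minimal sketch using SQLite from the Python standard library (table and column names are only examples; the embedding/retrieval part is left out):

import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect("chat.db")  # placeholder path
conn.execute("""
    CREATE TABLE IF NOT EXISTS messages (
        id      INTEGER PRIMARY KEY,
        session TEXT NOT NULL,
        role    TEXT NOT NULL,   -- "user" or "model"
        text    TEXT NOT NULL,
        created TEXT NOT NULL    -- ISO-8601 timestamp
    )
""")

def log_message(session_id, role, text):
    conn.execute(
        "INSERT INTO messages (session, role, text, created) VALUES (?, ?, ?, ?)",
        (session_id, role, text, datetime.now(timezone.utc).isoformat()),
    )
    conn.commit()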

You’ll note that the object returned by start_chat contains the entire history and, after you send the message, includes both the message and reply from the model. You should serialize this object (or the history object in it) so that you can pass the history to start_chat.

As for where and how you serialize it - that depends on the rest of your architecture and what you’re familiar with. I tend to use Firebase since it lets me store the history clearly against a user and/or session ID.

You mentioned influencing responses being a “problem”. The entire point of a per-session chat history is that it can use past parts of the conversation to influence later parts.


Ok, I dumped ChatSession.history to a pickle file, since that gets reloaded the next time I run the script to perform persistent conversational tasks.
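For reference, the dump/reload cycle boils down to something like this (history.pkl and the model name are placeholders):

import os, pickle
import google.generativeai as genai

HISTORY_PKL = "history.pkl"

history = []
if os.path.exists(HISTORY_PKL):
    with open(HISTORY_PKL, "rb") as f:
        history = pickle.load(f)

model = genai.GenerativeModel("gemini-1.5-flash")
chat_session = model.start_chat(history=history)
chat_session.send_message("Who are you?")

with open(HISTORY_PKL, "wb") as f:
    pickle.dump(chat_session.history, f)  # list of Content objects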

But the problem with this method appears when using the Files API: if I later delete an uploaded file (manually, or after it expires in 48 hours), the history still references that file for multimodal use, and it errors when it tries to reference the deleted file. That happens when reusing the entire history from the ChatSession object.

It can be reproduced with:

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")  # any multimodal Gemini model

chat_session = model.start_chat(history=[], enable_automatic_function_calling=True)
chat_session.send_message("Who are you?")
chat_session.send_message("Are you a cat?")
chat_session.send_message("What is your favorite color?")

# Send multimodal input
file = genai.upload_file("IMAGE_FILE_PATH")
chat_session.send_message(["What is this?", file])

print(chat_session.history)

file.delete()
chat_session.send_message("What did we discuss?")  # errors: the history still references the deleted file

I think the solution might be backing up the history, deleting the chat history, and retrying without the multimodal Files API references when the exception occurs. But I couldn’t come up with proper working code, considering ChatSession.history consists of a list of objects that I’m having a hard time figuring out.


As for the previous method of saving chat history as an appended list of request/response role dictionaries: it does work, and yes, it does influence responses. But using the dumped/loaded ChatSession.history attribute as the history parameter for a new ChatSession seems to work really well, especially for automatic function calling, because it is not misled by history where the model sees it should do something (from previous function-calling attempts) but has no idea how; with the manual method, automatic function calling sometimes fails to pick that up.

I can see there is a consistency issue between the stored history and the history resuscitated three days later, because the file references have since been wiped out.

Assuming your client code still has the files proper (images, I guess), you could try to “repair” the history object just retrieved from the database, before putting it into the ChatSession object. The repair would go like this: step through each row in the history; if it’s fileData, use the File API to do a get_file (google.generativeai.get_file); if that fails, you know that Pac-Man ate your cloud file, so resuscitate the file by re-uploading it through the File API; then modify the current fileData row to use this new uri (replacing the old, dead one); continue until all rows are processed.

I have not coded such a procedure myself, but this or something equivalent is needed to restore the validity of the previously stored history.
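Untested, but the walk could look roughly like this, assuming the file URIs end in the files/<id> name, that get_file raises a google.api_core exception for dead files (PermissionDenied in practice), and that local_path_for is a hypothetical lookup from the dead uri back to your local copy:

import google.generativeai as genai
from google.api_core import exceptions as gexc

def repair_history(history, local_path_for):
    for content in history:
        for part in content.parts:
            uri = part.file_data.file_uri
            if not uri:
                continue  # not a file reference
            try:
                genai.get_file("files/" + uri.rsplit("/", 1)[-1])  # still alive?
            except gexc.GoogleAPIError:
                # Pac-Man ate the cloud file: re-upload and swap in the new uri
                fresh = genai.upload_file(local_path_for(uri))
                part.file_data.file_uri = fresh.uri
                part.file_data.mime_type = fresh.mime_type
    return history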

I had implemented a temporary solution: when a PermissionDenied exception occurs, it re-writes the history by iterating over the existing ChatSession.history object and uses the result, with all the file references removed, as the history parameter of a new ChatSession. It involves some redundant code, though.

:joy:, I later thought of that “repair” too, since it’s the only feasible repair if the files themselves are no longer available to the client. It is the amputation method: the arm is sick, cut off the arm.

You can still do the amputation in advance, though - you don’t need to wait until it throws an exception and then clip that Part. Walk through the rows, find the broken links (using get_file), and clip them. Then pass the cleaned-up history to the ChatSession object. Dealing with lots of exceptions makes your application logs messy and difficult to use.
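A rough sketch of that preemptive pass (same assumptions about uri naming and error types as the repair above; genai.protos assumes a recent SDK version):

import google.generativeai as genai
from google.api_core import exceptions as gexc

def file_is_alive(uri):
    try:
        genai.get_file("files/" + uri.rsplit("/", 1)[-1])
        return True
    except gexc.GoogleAPIError:
        return False

def clip_dead_file_parts(history):
    cleaned = []
    for content in history:
        parts = [p for p in content.parts
                 if not p.file_data.file_uri or file_is_alive(p.file_data.file_uri)]
        if parts:  # drop entries left with no parts at all
            cleaned.append(genai.protos.Content(role=content.role, parts=parts))
    return cleaned

# chat_session = model.start_chat(history=clip_dead_file_parts(old_history))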

Anyway, glad you’ve found a workaround.

The solution I keep coming back to (and what I’ll probably implement in LangChain.js) is to have an [in-memory] media manager that is responsible for:

  • Mapping a media object (or “permanent” URL) into a File API URL
    • Possibly by uploading it
  • Mapping a File API URL to the media object
  • Maintaining the cache expiration date of the File API URL

This way, it goes through the history when it is loaded and, for each entry:

  • If it is a non-File API URL, turn it into a File API URL
    • If it is already in the cache, use the File API URL from the cache (but see below for cache validation)
    • If it is not in the cache, upload it to the File API and store the map between the external URL and the File API URL
  • If it is a File API URL, make sure it is still valid
    • If it is not, re-upload it and store the map again

This way, the history or prompt can contain either the File API URL or something else and it can “do the right thing” before sending it to Gemini.
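In Python terms, a rough sketch of that manager might look like this (MediaManager and ensure_file_uri are invented names; it assumes the File object exposes uri and expiration_time, and that the source is a local path genai.upload_file accepts):

import datetime
import google.generativeai as genai
from google.api_core import exceptions as gexc

class MediaManager:
    def __init__(self):
        self._by_source = {}  # source path/URL -> (file_uri, expiry)
        self._by_uri = {}     # file_uri -> source path/URL

    def ensure_file_uri(self, source):
        cached = self._by_source.get(source)
        if cached:
            uri, expiry = cached
            if datetime.datetime.now(datetime.timezone.utc) < expiry and self._alive(uri):
                return uri  # cached File API URL is still valid
        f = genai.upload_file(source)  # upload (or re-upload after expiry)
        self._by_source[source] = (f.uri, f.expiration_time)
        self._by_uri[f.uri] = source
        return f.uri

    def _alive(self, uri):
        try:
            genai.get_file("files/" + uri.rsplit("/", 1)[-1])
            return True
        except gexc.GoogleAPIError:
            return False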

Yes! Client-side bidirectional map between the FileAPI objects and the objects proper (the permanent media objects) is the most comprehensive solution, agreed.

My hack (to preemptively call get_file) doesn’t cost tokens and such, but it does cost time, which is unnecessary if the client knows from the cached map that the FileAPI object is still around.
Good solution. And it does belong in the library code; application code like the Discord bot shouldn’t have to deal with this :joy:


Yeah, this seems right to me. The shape of the solution I keep coming back to is that the File API needs to be wrapped by some other file manager abstraction, and this abstraction should manage the URLs, transparently converting (uploading) them to File API URLs just before fetch.


Also, does the Gemini API even have an OpenAI-like, human-readable request/response dict history syntax for storing images by URL? Is there an equivalent syntax for the ChatSession history parameter?

No. The Gemini API can only load images via URL from the File API (if you’re using the AI Studio API).

This is why in the “media manager” I outlined above, the first step is “if it is a non-File URL, turn it into a File API URL” by loading it.
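Concretely, that first step is just: fetch the bytes yourself and hand them to upload_file. A sketch, assuming a PNG image (the URL and names are placeholders):

import tempfile, urllib.request
import google.generativeai as genai

def to_file_api(url):
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    # stage the bytes in a temp file so upload_file has a path to read
    with tempfile.NamedTemporaryFile(suffix=".png", delete=False) as tmp:
        tmp.write(data)
        path = tmp.name
    return genai.upload_file(path, mime_type="image/png")

# file = to_file_api("https://example.com/cat.png")
# chat_session.send_message(["What is this?", file])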

There are some good reasons why you should be the one handling the URL and not Google. Not least of which is that more and more sites are concerned about Google or OpenAI stealing their content to train future models and are blocking the bots that do so.