Hi all, say I have a long conversation going through `model.startChat`. Does it automatically cache the history internally? I'm asking because of cost: with a long chat, I'd like to avoid having the model reprocess the same request/response pairs over and over again on every turn.
I'm hoping for something similar to OpenAI's prompt caching: https://platform.openai.com/docs/guides/prompt-caching
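To make the concern concrete, here's a toy sketch in plain JavaScript (not the real SDK, just my mental model of it): a chat wrapper that keeps history client-side and sends the whole accumulated transcript with every request, so input size grows on each turn unless the backend caches something.

```javascript
// Toy illustration of the cost concern -- NOT the actual SDK.
// The wrapper stores the conversation locally; each sendMessage
// builds a request payload containing the FULL history so far,
// which is what would make long chats progressively more expensive
// if nothing is cached server-side.
class ChatSketch {
  constructor() {
    this.history = [];
  }

  sendMessage(text) {
    this.history.push({ role: "user", parts: text });
    // Payload includes every prior turn plus the new user message.
    const payloadTurns = this.history.length;
    const reply = { role: "model", parts: `echo: ${text}` }; // stand-in response
    this.history.push(reply);
    return { reply, payloadTurns };
  }
}

const chat = new ChatSketch();
console.log(chat.sendMessage("hello").payloadTurns);         // 1 turn sent
console.log(chat.sendMessage("next question").payloadTurns); // 3 turns sent
```

So my question is whether the real `startChat` behaves like this sketch (full history resent and reprocessed every time) or whether there's caching that avoids paying for the repeated prefix.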
Thank you in advance.