Does model.startChat cache with the prompt?

Hi all, suppose I have a long conversation with the model via model.startChat. Does it automatically cache the history internally? I'm concerned about cost for long chats, so ideally it wouldn't reprocess the same request/response pairs over and over again.

Perhaps something similar to https://platform.openai.com/docs/guides/prompt-caching

Thank you in advance.

Hi @samuelkoesnadi , Welcome to the forum.

Context caching doesn't happen automatically in the google-genai SDK; you have to explicitly create a cache for the content you want to reuse. You can refer to the documentation for more details.
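To illustrate, here is a minimal sketch of explicit caching with the Python google-genai SDK. The model name, TTL, and the `long_document_text` variable are placeholders for your own values, and explicit caching typically requires the cached content to exceed a minimum token count, so treat this as a rough outline rather than a drop-in solution:

```python
from google import genai
from google.genai import types

client = genai.Client()  # reads the API key from the environment

long_document_text = "..."  # placeholder: a large context you reuse across turns

# Explicitly create a cache for the reused context (not done automatically)
cache = client.caches.create(
    model="gemini-2.0-flash-001",  # assumed model name; use one that supports caching
    config=types.CreateCachedContentConfig(
        contents=[long_document_text],
        system_instruction="You answer questions about the document.",
        ttl="3600s",  # how long the cache lives before it expires
    ),
)

# Reference the cache on later requests; cached tokens are billed at a reduced rate
response = client.models.generate_content(
    model="gemini-2.0-flash-001",
    contents="Summarize the key points.",
    config=types.GenerateContentConfig(cached_content=cache.name),
)
print(response.text)
```

Note that a chat session still resends the conversation history with each turn; the cache only covers the large, fixed context you put into it.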

Thank you for the warm welcome :slight_smile:

I understood. Thank you! :smiley: