Change system prompt - is it possible?

Hi

I am building an application that uses Gemini inside. Using the LLM function-calling capability, it decides which data it needs to fetch from a database, and answers the question based on that information (RAG, basically).

I was using Ollama before, and I always did this by feeding the data into the system prompt. In Gemini, though, the system prompt is managed a bit differently: you first set the model and the system prompt, and then start the chat.

import google.generativeai as genai

model = genai.GenerativeModel('gemini-2.0-flash-exp', system_instruction=llm_prompt)
chat = model.start_chat(history=[])

def chatbot(prompt):
    response = chat.send_message(prompt)
    return response.text

Is there any way to modify the system prompt on the chat object? The only alternative I see is to create another instance of the model with a different system prompt, but that looks like overkill to me.

Thanks

Hi @Javi_D_R, welcome to the forum!!!

You cannot change the system_instruction in the middle of a chat.

System instructions are meant to set the behavior of the model based on your specific needs, to provide more customized responses and to adhere to specific guidelines over the full user interaction with the model. They become part of your overall prompt.

Thanks.

Thanks for confirming.

Then, what is the best way to do RAG and feed the LLM different data depending on the prompt?

I use tools to retrieve that data from the DB, but how should it reach the LLM?

Thanks

Hey @Javi_D_R,

Here are a few suggestions from my side:

  1. Include the retrieved context directly in the user prompt for each query.
  2. Structure the prompt to clearly separate the context from the question.
  3. Use a consistent format that helps the model understand which information to use (see the sketch below).
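
For example, here is a minimal sketch of that pattern, reusing the `chat` object from your original post; `fetch_context` is a hypothetical stand-in for your own DB retrieval tool:

def fetch_context(question):
    # Placeholder: replace with your real database/tool lookup.
    return "order 1042 shipped 2024-05-03, two days after its due date"

def build_prompt(question):
    # Keep retrieved context and the question clearly separated,
    # using the same delimiters on every turn.
    context = fetch_context(question)
    return (
        "Answer using only the context below.\n\n"
        f"--- CONTEXT ---\n{context}\n\n"
        f"--- QUESTION ---\n{question}"
    )

response = chat.send_message(build_prompt("Which orders shipped late?"))
print(response.text)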

But feel free to explore further, iterate, and apply prompt-engineering best practices.

I’m never sure that I fully understand the issues I try to respond to, but in the system I’ve written, the system instructions are just another variable. Does this make sense?


  systemInstruction: { parts: [{ text: persona }] },
  // can be used by Gemini in place of { role: "model", parts: [{ text: persona }] },

  contents: [
    // { role: "model", parts: [{ text: persona }] },
    { role: "user", parts: [{ text: prompt }] },
  ],

‘persona’ is a fairly complex personality definition (tone, emotion, defined traits, etc.) formed and combined elsewhere. It can change, and often does, with each turn of the wheel.
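
For anyone on the Python SDK, the closest equivalent of treating the system instruction as a per-turn variable is, I believe, rebuilding the model object on each call, since GenerativeModel is cheap to construct. A sketch (the helper name is mine):

import google.generativeai as genai

def ask(persona, prompt):
    # Rebuild the model with the current persona as its system instruction.
    model = genai.GenerativeModel('gemini-2.0-flash-exp', system_instruction=persona)
    return model.generate_content(prompt).text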


Looks interesting, but this is for managing different personas. My problem is injecting custom data depending on the prompt.

I can do as Govind advised, but I think it would be good if Gemini had a feature to inject that info in an easy way. I think that would make the product more powerful.

You had the solution in your original post already. Chat as implemented in the Google API interface doesn’t allow you to swap out the system instruction mid-chat, but a ChatSession object (generative-ai-python/docs/api/google/generativeai/ChatSession.md at main · google-gemini/generative-ai-python · GitHub) is very lightweight. Make a new one with whatever system instruction you want, and move the history from the old one to the new one. All done.
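
A minimal sketch of that hand-off, assuming the same model name as in the original post (the helper name is mine):

import google.generativeai as genai

def swap_system_instruction(old_chat, new_instruction):
    # Build a fresh model with the new system instruction and hand the
    # accumulated history over to a brand-new ChatSession.
    new_model = genai.GenerativeModel(
        'gemini-2.0-flash-exp', system_instruction=new_instruction
    )
    return new_model.start_chat(history=old_chat.history)

# Mid-conversation swap: the transcript carries over, the instruction changes.
chat = swap_system_instruction(chat, "Answer using only the provided context.")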