A few prompt engineering questions

Just a couple of basic prompt engineering questions here.

(1) If specifying a system instruction in addition to providing a user prompt, is the system instruction simply pre-pended to the overall block of text that gets fed into the model? Or is it treated somehow differently?

(2) Is there any general guidance around how much instruction to stuff into the system prompt vs the user prompt? I.e., should the system prompt be complex and the user prompt simple, or, the system prompt be kept simple and the user prompt be complex?

(if it all just gets concatenated together then I suppose it doesn’t matter?)

Thanks!

Have a read of *Gemma formatting and system instructions* on Google AI for Developers; it should give you some inspiration.

Hi @BlankAdventure
System prompts are special instructions that set the model’s role for the entire conversation, not just a single message. Use the system prompt to define the model’s overall persona and rules. Use the user prompt for the specific task you want done right now. This approach ensures the model acts consistently while still letting you give it detailed questions.
Thank you


I understand that. However, for cases where the model is being used in a backend for a specific task, this becomes more ambiguous. For example, if the model is being used for something like named entity recognition, the instructions are always fixed, with the 'user' portion being only the chunk of text to analyze. In this case, the static instructions could live either in the system instruction or in the prompt itself (since the task is the same every time: "extract the author from the following text: {text}").

Sorry for the late reply. Yes, for a specific, repeatable task, the distinction between system instructions and user prompts does indeed blur sometimes. Specifically for the NER task, it's more efficient to place the constant instructions in the system instructions.
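For the NER case described above, that arrangement might look like this sketch (same payload shape as the Gemini REST API's `systemInstruction`/`contents`; the instruction wording and `build_ner_request` helper are made up for illustration):

```python
# Built once: the constant task instructions for every request.
SYSTEM_INSTRUCTION = (
    "You are a named-entity extraction service. "
    "Given a chunk of text, return only the author's name, nothing else."
)

def build_ner_request(text: str) -> dict:
    """Per-call request: the user turn carries only the chunk to analyze,
    while the fixed task definition stays in the system instruction."""
    return {
        "systemInstruction": {"parts": [{"text": SYSTEM_INSTRUCTION}]},
        "contents": [{"role": "user", "parts": [{"text": text}]}],
    }
```

A practical side benefit of this split is that some backends cache the system instruction across calls, so keeping the variable text out of it can reduce repeated work.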