What is the character limit that I can load in system instructions?

Hello! In my tests, if I load a large amount of information in the system instructions, Gemini doesn’t understand it, but if I load the same information in the first chat prompt, it understands it. Is there a character limit not stated in the documentation to use in system instructions?

Hey @Rivera_VIAVOIP, no, there's no character limit for system instructions. You just have to make sure that everything (prompt + instructions) stays below the model's overall max input tokens.

Would you mind sharing more details? What are you passing in the system instructions? How large is it?

Hello. Thank you very much for the quick response. I am providing a product catalog from a large multinational corporation containing product names and detailed descriptions. Gemini 1.5, by default, has knowledge of the company and some of its products. However, I want the system to ignore its prior knowledge about the products and respond solely based on the information I have defined in the catalog.

When I place this instruction in the system instructions, it doesn’t work. But when I include it as the first prompt in the chat, it works as intended. The total size of the instructions, including some rules, the catalog, and useful links, is approximately 80k characters.

The issue is that if I provide the information directly in the chat, each interaction consumes more than 80k characters, or about 18k tokens. If I keep it in the system instructions, that initial payload doesn't appear to be counted, but in my case it doesn't function properly. Thank you very much for your response.

Note: this was translated by Gemini; the translation is 1000x better than using a regular translator! :wink:

The protocol is stateless. If your observation comes from a test where the system instruction was set and the second interaction is not what you expect, it is because the second request did not include the system instruction. There is no residual effect from the first request in which the system instruction was specified. Put bluntly: the system instruction is not a device for saving token consumption.
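To put that in code: because the API is stateless, every request must carry the system instruction (and the chat history) again. A minimal sketch in Node.js; the helper and its values are illustrative, only the field names follow the public JSON schema:

```javascript
// The Gemini API is stateless: every generateContent request must carry the
// system instruction again; nothing persists between calls.
// Illustrative helper only -- field names follow the public JSON schema.
function buildRequest(systemText, history, userText) {
  return {
    systemInstruction: { parts: [{ text: systemText }] }, // resent every time
    contents: [
      ...history, // prior turns must also be resent every time
      { role: 'user', parts: [{ text: userText }] },
    ],
  };
}

const SYSTEM = 'Answer only from the provided catalog.';

// First turn.
const first = buildRequest(SYSTEM, [], 'What is product X?');

// Second turn: the history grows, and the system instruction is sent again.
const second = buildRequest(
  SYSTEM,
  [...first.contents, { role: 'model', parts: [{ text: '(answer)' }] }],
  'And product Y?'
);

console.log(second.systemInstruction.parts[0].text);
```

Note that both `first` and `second` carry the full system instruction, which is why it cannot reduce per-request token usage.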

Hello, I conducted further tests, simplifying the instruction to validate, and observed that the issue actually arises when I pass the systemInstruction through VertexAI. When passed through Vertex, it is not processed, but if I execute it through AI Studio, it works as expected.
The ultimate goal is not to save tokens, but rather to ensure that my environment functions as expected. I am passing system instructions, but as mentioned, it seems to not be working via the Vertex API.

My environment is VertexAI and node.js.
A simple instruction like “Your name is Mike” is used, but when I run it through Vertex and ask for the name, it responds with “Gemini”.

This was just a simple example to demonstrate, and I am basing it on the documentation below. Is this the correct way to pass the systemInstruction via Vertex?

Can you show the exact code and the model you were using to pass system instructions to the Vertex AI Gemini API?

System Instructions are only available (today) for gemini-1.5-pro-preview-0409.

I just tested it with the instructions “You are a helpful agent that always answers in french” and it worked as expected.
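For anyone comparing setups, here is a sketch of the shape of the raw Vertex AI REST call with a system instruction. The project, location, and model values are placeholders, and the actual authenticated fetch is left commented out:

```javascript
// Sketch of a Vertex AI generateContent request carrying a system instruction.
// PROJECT, LOCATION and MODEL are placeholders -- substitute your own values.
const PROJECT = 'my-project';
const LOCATION = 'us-central1';
const MODEL = 'gemini-1.5-pro-preview-0409';

const url =
  `https://${LOCATION}-aiplatform.googleapis.com/v1/projects/${PROJECT}` +
  `/locations/${LOCATION}/publishers/google/models/${MODEL}:generateContent`;

// The system instruction is its own list of parts, separate from `contents`.
const body = {
  systemInstruction: {
    parts: [{ text: 'You are a helpful agent that always answers in french' }],
  },
  contents: [{ role: 'user', parts: [{ text: 'Hello, who are you?' }] }],
};

// A real call would look like this (requires an OAuth access token):
// await fetch(url, {
//   method: 'POST',
//   headers: { Authorization: `Bearer ${token}`, 'Content-Type': 'application/json' },
//   body: JSON.stringify(body),
// });
console.log(url);
```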

Hi…
The problem was the location. It only worked when I set

location = 'us-central1'

Thanks!

Regarding input token consumption, I would like to understand why, when I use system instructions through AI Studio, there appears to be no token consumption for the instructions, whereas on Vertex AI those tokens do appear and are counted. Could this be a bug in AI Studio not showing the consumption related to system instructions?

Unfortunately, I made an offer for a project based on tests I conducted via the console, where the input tokens in the system instructions showed no consumption. Based on this, I built a highly specialized consultant, but I relied on the figures presented in AI Studio, which will make my project unfeasible.

Hi there, I’m not affiliated with Google or Vertex. I’ll hazard an educated guess (until someone from Google shows up to set the record straight) that it’s a UI bug in AI Studio. When it first launched, the system instruction wasn’t available in the API. Then it was added, the top part of the AI Studio screen was adjusted to accommodate it, and my guess is that the token count (which I assume is the result of a countTokens API call) was not updated.

Structurally the system instruction is another list of Parts, just like the prompt Parts. It all needs to be tokenized and processed. It takes the same amount of electricity to tokenize system instructions as prompts.
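To make the accounting concrete: billed input is roughly tokens(system instruction) + tokens(prompt) on every request. A toy estimate using the figures from this thread; the 4-characters-per-token ratio is only a rough heuristic for English text, so use the countTokens API for real numbers:

```javascript
// Rough illustration only -- real counts come from the countTokens API.
// Assumes ~4 characters per token, a common ballpark for English text.
const approxTokens = (text) => Math.ceil(text.length / 4);

const systemInstruction = 'x'.repeat(80_000); // e.g. an 80k-character catalog
const prompt = 'What is product X?';

// Both the system instruction and the prompt are tokenized and billed as
// input on every single request.
const totalInput = approxTokens(systemInstruction) + approxTokens(prompt);
console.log(totalInput); // roughly 20k tokens per request for the catalog alone
```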

Hi… Okay. I agree that it must be a visual bug in AI Studio, and I have already submitted feedback about it. Anyway, thank you very much for your reply. Unfortunately, I took the information I was seeing at face value and believed it. But I understand your point perfectly. I’ll have to go back to the drawing board.