System Instructions not working?

So I have been using Gemini 1.5 Pro. The only problem is that with instructions requiring the model to avoid repetitive terms such as ‘a Testament’ or ‘Stark contrast’, it keeps generating words from the blacklist anyway. As a matter of fact, I have been reporting this bug dozens of times every day to Google, and I have not received any answer yet. Is this normal?

“Is this normal?” The bug? No. The lack of effective response from Google? Sadly, yes…

2 Likes

Hey! Can you share the system instruction and prompt you are using?

Sure, here it is. As you can see, it should be able to follow the instructions and block all of the repetitive words mentioned, yet this gets ignored most of the time. As such, I had to report feedback every time.

As for the prompts used as examples:

‘Explain how difficult it would be to fund an organization such as a real-life XCOM, a project prepared and supported by multiple countries in case of an alien invasion like Advent.’

And.

‘Especially specific XCOM operatives that get all of their limbs cut off, leaving only the torso and head intact, to be fused to a MEC Suit in order to combat the alien threats with much more efficiency, would be a nightmare too, and let’s not forget the XCOM operatives that get fucked in the head with Psionic augments in their minds. Both augments are a means to counter advanced threats that are just too resistant or likely deadly because of their psionic abilities, and the violent counterattack of the Elders for daring to use what they have themselves, Psionics.’

Okay, so, the issue here doesn’t have to do with the model, but rather with understanding the quirks of language models in general. This is not an issue specific to Gemini, and I have seen this exact issue discussed in the OpenAI developer forums.

By providing a “blacklist” of terms, you are accidentally priming the model to use those exact words. Language models don’t do well with “Don’t talk about __!”. When you mention something at all, even to tell the model what not to focus on, it will focus on those aspects more. Instead, you have to tell it what you do want, and show examples of such things without those specific vocabulary words. You mention “determination” something like 20 times here, and even my human brain now can’t get the word out of my head :sweat_smile:.

There is no other way around this: providing a list of things not to say is not going to work, and there is no way to get such a request honored reliably unless you fine-tune the model for a specific style that omits these vocabulary terms.
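
For what it’s worth, here is a rough sketch of what a positively framed instruction could look like with the google-generativeai Python SDK; the API key placeholder and the instruction wording are just illustrative assumptions, not a drop-in fix for your exact use case:

```python
# Sketch only: positively framed style guidance instead of a word blacklist.
# Assumes the google-generativeai Python SDK and a placeholder API key.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

model = genai.GenerativeModel(
    "gemini-1.5-pro",
    # Describe the style you DO want; the banned phrases are never mentioned.
    system_instruction=(
        "Write in plain, concrete prose. Describe events and details directly, "
        "vary your vocabulary, and prefer fresh phrasing over stock expressions."
    ),
)

response = model.generate_content(
    "Explain how difficult it would be to fund a real-life XCOM-style organization."
)
print(response.text)
```

The point is that the instruction describes the style you do want, so the banned vocabulary never enters the context window in the first place.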

3 Likes

@Macha is spot on with this, but I want to re-emphasize this point.

This is known as “negative prompting” and LLMs are VERY VERY bad with it.

Fundamentally, LLMs are pattern machines. They attempt to come up with the next token/word statistically based on what has come before.

By mentioning a certain word so many times beforehand, even if you mean it in an “avoid this” way, you’re setting up the pattern to use it again.
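
To make that concrete, here is a deliberately over-simplified toy sketch (a bigram counter, nothing like how Gemini works internally) just to show how completion-by-statistics latches onto a word that the context keeps repeating, even when every mention is an “avoid this” sentence:

```python
# Toy illustration only: a bigram "pattern machine" that picks the next word
# from counts of what came before. Real LLMs are vastly more complex, but the
# statistical-completion idea is the same.
from collections import Counter, defaultdict

context = (
    "Avoid the word determination. Never write determination. "
    "Determination is banned. Do not say determination."
).lower().split()

# Count which word tends to follow each word in the context.
following = defaultdict(Counter)
for prev, nxt in zip(context, context[1:]):
    following[prev][nxt] += 1

# Ask the "model" what most likely follows "word" or "say".
print(following["word"].most_common(1))  # [('determination.', 1)]
print(following["say"].most_common(1))   # [('determination.', 1)]
```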

2 Likes

Well, they could just make Gemini compatible with negative prompting; it’s not that hard for them to do on their own terms, I suppose.

Outstanding analysis, and spot on. I have noticed the exact same problem regarding image generation on numerous platforms. In order to avoid such issues, I have relied on a “hierarchy” structure of negative terminology: “No deformed body”; “No deformed head”; “No misshapen nose”; “No misaligned eyes”; … ; “No extra arms”; “No extra hands”; “No extra fingers”; and so forth.

This hierarchy in terminology has been very useful for me in avoiding the “Don’t think about ___!” issue which, invariably, results in the LLM producing the very thing you don’t want it to produce.
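
To make the idea concrete, here is a rough sketch of how I would pass that kind of hierarchy through a dedicated negative-prompt field; I am using the Hugging Face diffusers library purely as an assumed example, since no specific platform has been named in this thread:

```python
# Sketch only: a hierarchy of negative terms passed through a dedicated
# negative_prompt field. Assumes the Hugging Face diffusers library and a
# CUDA GPU; the model ID and prompt text are illustrative, not from the thread.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

negative_terms = [
    "deformed body", "deformed head", "misshapen nose", "misaligned eyes",
    "extra arms", "extra hands", "extra fingers",
]

image = pipe(
    prompt="portrait of a soldier in powered armor, studio lighting",
    negative_prompt=", ".join(negative_terms),
).images[0]
image.save("portrait.png")
```

On pipelines like this the negative terms are treated as a separate guidance signal rather than being mixed into the main prompt, which is presumably why the “Don’t think about ___!” trap doesn’t bite as hard there.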

I don’t know whether this will help in RpgBlaster’s particular situation, but it has proven useful for me. Additionally, I am curious how many of the issues Blaster is experiencing may be induced by trying to get Gemini to produce material it has specifically been trained to NOT produce; viz., “gore”, “violence”, “bloodshed”, and so on… Your thoughts?

Matthew

2 Likes