Using Gemini to detect abusive words, but it's not working

I need to detect abusive words using AI and map them to target categories, but it's not working.
I've attached an image of the prompt and the code.

Welcome to the forums!

It sounds like you're trying to implement your own safety filter. Is that correct?

It turns out that Gemini's built-in safety filter is catching this content before your prompt ever gets a chance to classify it. That's why the finishReason is set to "SAFETY".

While you could turn down the sensitivity of these safety filters, it would probably make more sense to work with them: check the safety ratings in the response in addition to checking for your own expected output.
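For example, here's a minimal Python sketch (assuming the google-generativeai SDK; the model name and prompt string are placeholders, not your actual code) of how you might treat a SAFETY block as a detection signal rather than a failure:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # assumption: replace with your key
model = genai.GenerativeModel("gemini-1.5-flash")  # assumption: any recent model

# Hypothetical prompt: your own abuse-classification instructions plus the user text.
prompt = "Classify the following message into abuse categories: <user text here>"

response = model.generate_content(prompt)
candidate = response.candidates[0]

if candidate.finish_reason.name == "SAFETY":
    # The built-in filter blocked the output. Instead of treating this as an
    # error, use the safety ratings themselves as your detection signal and
    # map their categories (harassment, hate speech, etc.) onto your own.
    for rating in candidate.safety_ratings:
        print(rating.category.name, rating.probability.name)
else:
    # The filter let the content through, so parse the model's classification.
    print(response.text)
```

If you do decide to relax the filters instead, the SDK accepts a per-request safety_settings override, but then your prompt has to do all of the detection work on its own.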