The Gemini API has configurable safety settings, and I have a question about them:
to make an app safe, which harm category needs to be blocked most aggressively?
I assume it would be HARM_CATEGORY_SEXUALLY_EXPLICIT
and HARM_CATEGORY_HATE_SPEECH
that should be set to the strictest threshold.
But here is my concern: if HARM_CATEGORY_HATE_SPEECH
is set too strictly, what happens when the model tries to make a joke or is just kidding…? Would harmless humor get blocked too?
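
For context, this is roughly how I understand the thresholds are set with the google-generativeai Python SDK (a minimal sketch; the API key and model name are placeholders, and BLOCK_LOW_AND_ABOVE is, as far as I know, the strictest available threshold):

```python
import google.generativeai as genai
from google.generativeai.types import HarmCategory, HarmBlockThreshold

genai.configure(api_key="YOUR_API_KEY")  # placeholder

model = genai.GenerativeModel(
    "gemini-1.5-flash",  # placeholder model name
    safety_settings={
        # BLOCK_LOW_AND_ABOVE filters anything rated "low" probability
        # of harm or higher, i.e. the strictest non-custom setting.
        HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
        HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
    },
)

response = model.generate_content("Tell me a light-hearted joke about programmers.")

# If the safety filter fires, the candidate carries no text, so check
# for parts before reading response.text instead of letting it raise.
if response.candidates and response.candidates[0].content.parts:
    print(response.text)
else:
    print("Blocked:", response.prompt_feedback)
```

My worry is exactly the scenario in that last branch: with the threshold that strict, would an edgy but harmless joke come back blocked (finish reason SAFETY) instead of as text?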