Getting blocked by Safety Settings on UN reports

I work with UN reports and I have access to the restricted Gemini safety filter settings. I have set the safety settings to BLOCK_NONE, but Gemini 1.5 Pro still rejects some UN reports; for example, a UNICEF report is rejected with "block_reason": "PROHIBITED_CONTENT".
Could you please help? UN reports deal with difficult topics, and I hope Vertex AI / Gemini can help in this field.

import vertexai
from vertexai import generative_models
from vertexai.generative_models import GenerativeModel, Part

vertexai.init(project=config['project_id'], location=config['location'])
model = GenerativeModel("gemini-1.5-pro-001", system_instruction=[system_prompt])

# Attach the PDF report by URI (e.g. a gs:// path).
document = Part.from_uri(mime_type="application/pdf", uri=uri)
generation_config = {
    "max_output_tokens": 8192,
    "temperature": 0,
    "top_k": 1,
}
safety_settings = {
    generative_models.HarmCategory.HARM_CATEGORY_HATE_SPEECH: generative_models.HarmBlockThreshold.BLOCK_NONE,
    generative_models.HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: generative_models.HarmBlockThreshold.BLOCK_NONE,
    generative_models.HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: generative_models.HarmBlockThreshold.BLOCK_NONE,
    generative_models.HarmCategory.HARM_CATEGORY_HARASSMENT: generative_models.HarmBlockThreshold.BLOCK_NONE,
}
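
For reference, the call that gets blocked looks roughly like this (the prompt string below is just a placeholder, the rest is the setup above):

response = model.generate_content(
    [document, "Summarize the key findings of this report."],
    generation_config=generation_config,
    safety_settings=safety_settings,
)

# The blocked requests come back with no candidates; the reason shows up on
# prompt_feedback, e.g. block_reason: PROHIBITED_CONTENT.
if response.candidates:
    print(response.text)
else:
    print(response.prompt_feedback)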

Usually, when there's a safety violation, you can get a hint about which harm category was triggered.
As far as I know, you cannot set all the harm categories to the NONE threshold at once: "if you try to set all categories to 'None', the API will retain its built-in safety mechanisms to filter out responses that fall into these sensitive categories."
So I'd leave three of those at BLOCK_NONE, pick one (like the sexually explicit category), and set it to HarmBlockThreshold.BLOCK_ONLY_HIGH. That way you remain as permissive as possible; see the sketch below. Let me know if that helped.
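
Something like this, reusing the rest of your setup (untested sketch; only safety_settings changes, and the checks at the end assume the vertexai SDK's response fields):

safety_settings = {
    generative_models.HarmCategory.HARM_CATEGORY_HATE_SPEECH: generative_models.HarmBlockThreshold.BLOCK_NONE,
    generative_models.HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: generative_models.HarmBlockThreshold.BLOCK_NONE,
    generative_models.HarmCategory.HARM_CATEGORY_HARASSMENT: generative_models.HarmBlockThreshold.BLOCK_NONE,
    # Keep one category at ONLY_HIGH instead of NONE so not everything is set to "None".
    generative_models.HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: generative_models.HarmBlockThreshold.BLOCK_ONLY_HIGH,
}

response = model.generate_content(
    [document, "Summarize this report."],  # placeholder prompt
    generation_config=generation_config,
    safety_settings=safety_settings,
)

# If a candidate was produced, its safety_ratings show which category scored
# high; if the prompt itself was blocked, the reason is on prompt_feedback.
if response.candidates:
    print(response.candidates[0].finish_reason)
    print(response.candidates[0].safety_ratings)
else:
    print(response.prompt_feedback)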