Gemini is blocking the response for safety reasons, but why? The content doesn't contain any sensitive data

I’ve been using Gemini to extract data from payment receipt images into a JSON format. The model was working as expected for a few days, but now it’s flagging the content as unsafe and failing to complete the conversion. I’m unable to pinpoint the exact cause of this issue.

endpoint: https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent
response:

    "candidates": [
        {
            "finishReason": "SAFETY",
            "index": 0,
            "safetyRatings": [
                {
                    "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
                    "probability": "NEGLIGIBLE"
                },
                {
                    "category": "HARM_CATEGORY_HATE_SPEECH",
                    "probability": "NEGLIGIBLE"
                },
                {
                    "category": "HARM_CATEGORY_HARASSMENT",
                    "probability": "NEGLIGIBLE"
                },
                {
                    "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
                    "probability": "HIGH"
                }
            ]
        }
    ]

input example: (attached payment receipt image)

The “dangerous content” flag is probably being triggered by what looks to the model like personal or financial information: a QR code or a long string of digits on a receipt can resemble a credit card number or other data the model is discouraged from reproducing.

For each of the four categories reported by probability level, you can set your own safety threshold parameters when you make your API calls.
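You do this by adding a safetySettings array to the generateContent request body. Here is a minimal sketch (the prompt text and base64 image data are placeholders for your own, and BLOCK_ONLY_HIGH is just one of the allowed thresholds alongside BLOCK_LOW_AND_ABOVE, BLOCK_MEDIUM_AND_ABOVE and BLOCK_NONE):

    {
        "contents": [
            {
                "parts": [
                    { "text": "Extract the receipt fields into JSON." },
                    { "inline_data": { "mime_type": "image/jpeg", "data": "<base64 receipt image>" } }
                ]
            }
        ],
        "safetySettings": [
            {
                "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
                "threshold": "BLOCK_ONLY_HIGH"
            }
        ]
    }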

The detection is so poor and unreliable that I would set them all to BLOCK_NONE and use your own AI natural-language classifier as a moderator if you need to prevent certain inputs or outputs (lest a chat about a kid’s cartoon pocket monster trigger “SEXUAL: HIGH” for no reason).
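For example, a safetySettings block with all four reported categories relaxed would look roughly like this (it goes in the same request body as above; check the current docs, since availability of BLOCK_NONE can depend on the model and account):

    {
        "safetySettings": [
            { "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT", "threshold": "BLOCK_NONE" },
            { "category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_NONE" },
            { "category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_NONE" },
            { "category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_NONE" }
        ]
    }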
