Hi everyone,
We are currently using gemini-2.0-flash-lite to classify and moderate user uploads (primarily images).
Our workflow consists of two main steps (a rough sketch of the classification call follows the list):

- Classification: The API provides a detailed classification of the content (description, tags, and categorization into safety buckets such as pornography, violence, etc.).
- Moderation: Based on these results, the AI evaluates the classified content and decides whether action needs to be taken.
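For context, the classification step is roughly equivalent to the following Python sketch using the google-genai SDK. The prompt wording, MIME type, and model handling are illustrative placeholders, not our exact production code:

```python
from google import genai
from google.genai import types

client = genai.Client()  # reads the API key from the GEMINI_API_KEY env var

def classify_image(image_bytes: bytes) -> str:
    """Step 1: ask the model for a description, tags, and safety buckets."""
    response = client.models.generate_content(
        model="gemini-2.0-flash-lite",
        contents=[
            types.Part.from_bytes(data=image_bytes, mime_type="image/jpeg"),
            "Describe this image, list content tags, and categorize it into "
            "safety buckets (pornography, violence, ...). Respond as JSON.",
        ],
    )
    return response.text
```

The moderation step is then a second, text-only call that takes this JSON and decides whether the upload requires action.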
With the upcoming sunset of gemini-2.0-flash-lite at the end of March, we intended to migrate to gemini-2.5-flash-lite. Unfortunately, we’ve discovered that 2.5-flash-lite (and 2.5-flash) seems unusable for this specific use case.
As soon as an image even remotely touches on pornographic content, the request is rejected with `[blockReason] => OTHER`. This is particularly problematic because we process images in batches of 30: if a single image in a batch triggers the rejection, the entire batch is blocked, and the response contains no information about which specific image caused it.
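Since the blocked response carries only `promptFeedback.blockReason = OTHER` and no per-image detail, the best workaround we can come up with is to bisect a blocked batch and re-submit the halves until the trigger is isolated, which multiplies our request volume. A rough sketch (`parse_per_image_results` is a hypothetical helper and `BatchBlockedError` our own exception; neither comes from the SDK):

```python
class BatchBlockedError(Exception):
    """Raised when the whole request is rejected, e.g. blockReason OTHER."""

def classify_batch(images: list[bytes]) -> list[str]:
    """Send all images in a single request; raise if the prompt is blocked."""
    response = client.models.generate_content(
        model="gemini-2.5-flash-lite",
        contents=[types.Part.from_bytes(data=b, mime_type="image/jpeg")
                  for b in images]
                 + ["Classify each image, in order, as a JSON array."],
    )
    feedback = response.prompt_feedback
    if feedback is not None and feedback.block_reason:
        raise BatchBlockedError(feedback.block_reason)
    return parse_per_image_results(response.text)  # hypothetical helper

def classify_with_isolation(images: list[bytes]) -> dict[int, str]:
    """Bisect a blocked batch to find which image(s) trip the filter."""
    try:
        return dict(enumerate(classify_batch(images)))
    except BatchBlockedError:
        if len(images) == 1:
            return {0: "BLOCKED"}  # single image isolated as the trigger
        mid = len(images) // 2
        left = classify_with_isolation(images[:mid])
        right = classify_with_isolation(images[mid:])
        # Shift right-half indices back to positions in the original batch.
        return {**left, **{i + mid: r for i, r in right.items()}}
```

Even in the best case this costs several extra requests per offending image, and it still wastes quota on content the 2.0 model classified in a single call.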
It is honestly disappointing to see Google release newer models that appear to have fewer capabilities or more restrictive filters than their predecessors.
We generally like Gemini and would prefer to continue working with the API. However, since this workflow no longer seems viable with the new models, we are now forced to consider switching to a competitor.