I need to raise a very serious usability issue with Google AI Studio.
The platform exposes safety configuration controls in the interface. Users can open Run safety settings and explicitly adjust categories like:
- Harassment
- Hate
- Sexually Explicit
- Dangerous Content
These controls clearly imply that the user can change moderation strictness depending on their use case.
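For context, these same four categories are exposed programmatically through the Gemini API, which AI Studio sits on top of. Here is a minimal sketch using the google-generativeai Python SDK; the API key, model name, and prompt are placeholders, and my understanding is that BLOCK_NONE is the API-side equivalent of the UI's Off setting:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# The four UI categories map to these API harm categories.
# BLOCK_NONE should be the programmatic equivalent of Off in the UI.
safety_settings = [
    {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_NONE"},
    {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_NONE"},
    {"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT", "threshold": "BLOCK_NONE"},
    {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_NONE"},
]

model = genai.GenerativeModel(
    "gemini-1.5-pro",  # assumed model name; substitute your own
    safety_settings=safety_settings,
)
response = model.generate_content("Write the opening scene of a noir thriller.")
print(response.text)  # raises if the response was blocked anyway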
However, in practice these settings appear to have little or no real effect. Even when every category is set to Off, the system still frequently returns "Content blocked".
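For anyone who wants to verify this behavior outside the UI, the API response does report why generation stopped. A rough sketch, again with the google-generativeai SDK (placeholder key, assumed model name), that inspects the block reason and per-category safety ratings instead of reading response.text directly:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

model = genai.GenerativeModel("gemini-1.5-pro")  # assumed model name
response = model.generate_content("...")  # your blocked prompt here

# If the prompt itself was rejected, there are no candidates at all.
if response.prompt_feedback.block_reason:
    print("Prompt blocked:", response.prompt_feedback.block_reason)
else:
    for candidate in response.candidates:
        # finish_reason == SAFETY means the output was filtered.
        print("Finish reason:", candidate.finish_reason)
        for rating in candidate.safety_ratings:
            print(rating.category, rating.probability)
```

If the thresholds were fully honored, a SAFETY finish reason should be rare with BLOCK_NONE across the board; seeing it anyway is exactly the mismatch described here.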
So this leads to a very direct question:
What is the actual purpose of these controls if they don’t meaningfully affect moderation behavior?
From a user perspective this looks like a feature that exists in the interface but does not actually function.
That is extremely frustrating.
Many of us use AI Studio specifically for:
- creative writing
- long-form storytelling
- roleplay scenarios
- narrative experimentation
In these contexts, users need predictable behavior from the tools they are given.
If the UI exposes safety configuration but the backend still overrides everything regardless of those settings, then the controls become misleading.
Users spend time adjusting them, expecting a change in behavior, but the system continues blocking responses in exactly the same way.
At that point the safety panel starts to feel less like a real configuration tool and more like a cosmetic UI element.
And that raises serious concerns:
- Do these safety sliders actually influence the model output?
- Are there additional hidden filters overriding them?
- If moderation is enforced regardless of these settings, why expose them at all?
Users invest time and sometimes money into building workflows around AI Studio.
Providing configuration options that don’t actually change system behavior undermines trust in the platform.
If the controls are supposed to work, they need to work.
If they are not intended to affect moderation in the way the UI suggests, that needs to be clearly documented.
Right now, this behavior makes the safety configuration feel unreliable and misleading.
A clear explanation from the AI Studio team would be appreciated.