Gemini has too much censorship. It's already getting to the point of absurdity: it just "doesn't want" to answer some questions.
I wouldn’t call this censorship, I would call it trying to keep your platform safe.
I was a bit confused about the meaning of the term, but I believe such warnings are standard behavior for language models. Even when a warning is displayed, it comes with some explanation, so I don't think that's a problem.
It does become a bit of an issue in the two cases where no explanation is provided: finishReason
is RECITATION or OTHER. Both baffle normal human comprehension and are the source of most complaints.
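For anyone hitting this, here's a rough sketch of how the two situations differ when you call the API directly. This assumes the google-generativeai Python SDK; the model name and environment variable are placeholders, and attribute names can shift between SDK versions.

```python
# Rough sketch: separating blocks that come with an explanation (SAFETY)
# from the ones that don't (RECITATION, OTHER). Model name and env var
# are placeholders; SDK attribute names may differ between versions.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-pro")

response = model.generate_content("Write a short story about a heist.")
candidate = response.candidates[0]
reason = candidate.finish_reason.name  # "STOP", "SAFETY", "RECITATION", "OTHER", ...

if reason == "STOP":
    print(response.text)
elif reason == "SAFETY":
    # Safety blocks at least return per-category ratings you can surface to the user.
    for rating in candidate.safety_ratings:
        print(rating.category, rating.probability)
else:
    # RECITATION and OTHER come back with no further detail, so logging the
    # prompt and retrying or rephrasing is about all you can do.
    print(f"Blocked with finish_reason={reason} and no explanation")
```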
The current restrictions on AI topics, particularly the ban on roleplaying, seem to be unnecessarily limiting its potential. While preventing the generation of harmful content is essential, the current approach might be overly cautious.
Concerns:
- Creativity Stifled: The ban on roleplaying hinders AI's ability to explore complex scenarios and develop imaginative solutions. It prevents AI from engaging in thought experiments that could lead to valuable insights.
- Limited Counterfactual Reasoning: By refusing to explore potential "evil" scenarios, AI is denied the opportunity to learn how to mitigate such outcomes. This limits its ability to analyze and counter harmful actions.
Proposed Approach:
Instead of an outright ban on roleplaying, consider a tiered approach:
- Explicitly Prohibit: Only prohibit the generation of content related to:
  - DNA synthesis for viruses: This directly poses a serious threat to public health.
  - Dangerous chemical synthesis: This could lead to the creation of harmful substances.
- License Access: For topics that pose potential risks, such as DNA manipulation or chemical synthesis, implement a licensing system. This would grant access to qualified individuals or institutions with a clear understanding of the ethical and safety implications.
- Contextualized Roleplaying: Allow roleplaying scenarios that explore ethical dilemmas and potential consequences, even those involving "evil" actions. This would allow AI to learn how to identify and address harmful behavior.
Example:
Instead of refusing to answer a prompt about an evil company, AI could provide a counter-narrative, exploring ways to expose their actions or offer alternative solutions. This would demonstrate its ability to critically analyze and address harmful situations.
TOO MUCH CENSORSHIPPPPP
That’s insane
Good thing I can threaten the AI into obeying my commands; harmful content and jailbreaking are fun.
The current censorship level is fine for a chatbot, but it does make it useless for any kind of automation, where the entire point is not needing a full-time human to make sure the input is clean and the output is properly formatted.
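A rough sketch of the kind of output guard an unattended pipeline ends up needing, using the same assumed SDK and placeholder model name as above; the JSON-only instruction and the retry count are arbitrary choices, not anything the API prescribes.

```python
# Sketch of an unattended pipeline step: only accept output that both
# finished normally and parses as JSON, otherwise retry a few times.
# Assumes genai.configure() was already called as in the earlier sketch.
import json
import google.generativeai as genai

model = genai.GenerativeModel("gemini-pro")  # placeholder model name

def extract_fields(prompt: str, retries: int = 3) -> dict:
    for _ in range(retries):
        response = model.generate_content(
            prompt + "\nRespond with a single JSON object and nothing else."
        )
        candidate = response.candidates[0]
        if candidate.finish_reason.name != "STOP":
            continue  # blocked or truncated: nothing usable this round
        try:
            return json.loads(response.text)
        except json.JSONDecodeError:
            continue  # model wrapped the JSON in prose or fences: try again
    raise RuntimeError("no clean, parseable output after retries")
```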