I’ve noticed that words such as ‘adolescent’, ‘boy’, and their variations cause any prompt to be denied for inappropriate content, even when the context is entirely innocuous. This leads to false positives that block valid and harmless prompts.
Will there be a fix to this?