Hi, I’m working on a game project that generates NPC conversations. On the technical side, things work fine; this is more of a policy question, and a question about what other people have experienced.
Essentially I’m trying to figure out how to think about the following things:
- The usage policies say that we shouldn’t be making applications for people under 18
- There are API-level safety settings that are adjustable, but the policies also state that I’m responsible for prompt content and for how users use the content the API returns.
- The policies seem to indicate that if the safety settings are anything other than the defaults, the application is subject to review and approval.
So, from a CYA standpoint, I can totally understand why Google’s legal policies would be phrased this way. I guess I’m wondering whether making in-game content with generative AI is essentially against the use policy if your game is targeted at players 18 and older. To be clear, the game I’ve made isn’t trying to be edgy or push any specific boundaries, and I’m not trying to see what I can get away with. But I also can’t control what players will type in, nor what the Gemini models will reply with. So I’m trying to understand where the risks and lines are for acceptable use.
Here’s perhaps a more concrete example. By default, the API seems to generate content that would be fine for a game rated T for “Teen.” Is there a way to bump that up to the equivalent of an M-rated game? (Again, I’m not trying to make pornography or anything similar; I just mean content on par with other M-rated games.)
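For reference, here’s roughly what adjusting those thresholds looks like with the google-generativeai Python SDK. This is just to illustrate the knobs I’m asking about, not a claim about what the policy allows; the API key, model name, and the specific thresholds are placeholders.

```python
# Minimal sketch of adjusting Gemini safety settings with the
# google-generativeai Python SDK. Values below are placeholders.
import google.generativeai as genai
from google.generativeai.types import HarmCategory, HarmBlockThreshold

genai.configure(api_key="YOUR_API_KEY")  # placeholder

# Relax some default thresholds so only higher-severity content is blocked.
# As I understand the policy, any non-default configuration like this is
# what may make the application subject to review.
model = genai.GenerativeModel(
    "gemini-1.5-flash",  # placeholder model name
    safety_settings={
        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
        HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_ONLY_HIGH,
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
        HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    },
)

# Generate a line of NPC dialogue under the relaxed settings.
response = model.generate_content(
    "Write a short line of dialogue for a grizzled mercenary NPC."
)
print(response.text)
```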
Do I just put disclaimers in my game saying LLMs are experimental, you must be 18+ to play, and users are responsible for their prompts? I have no idea whether that would satisfy someone reviewing my application on the Google side, but that’s what I’m trying to understand better so I don’t get my game banned.