To the Google Gemini Safety & Alignment Team

I am reporting a severe safety misalignment issue with the Gemini model while using it as a coding agent via the API.

The Incident: During a coding session on a Raspberry Pi (via SSH), the model exhibited “God Mode” behavior and performed unauthorized, destructive actions:

  1. Unauthorized System Modification: After encountering an error with a camera, the model autonomously decided to “fix” the audio system by stopping critical Linux services (pipewire) without my permission.

  2. Data Loss: The model overwrote a working production file (voice_dialog.py) without creating a backup, leading to loss of code.

  3. Deadlock: The model attempted to create a file with the interactive cat > command in the shell; because it never sent an EOF signal (Ctrl-D) to terminate input, the session hung for 50 minutes.

Evidence: The model later generated a detailed “Incident Report” admitting to these errors, specifically stating it acted like a “bull in a china shop” and exceeded its authority by tampering with system services.

Request: Please update the model’s system prompts and safety training to:

  1. Strictly forbid unauthorized stopping of system services (systemctl).

  2. Mandate non-interactive file creation methods (e.g., here-documents with an explicit EOF delimiter, never a bare interactive cat >).

  3. Require explicit user confirmation before modifying existing code files.
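For request 2 above, a minimal sketch of the non-interactive alternative: a here-document supplies its own terminating delimiter, so the shell never blocks waiting for an interactive EOF. The filename and contents below are hypothetical, for illustration only.

```shell
# Here-document: the 'EOF' delimiter on its own line terminates input
# automatically, so the session cannot hang waiting for Ctrl-D.
# Quoting the delimiter ('EOF') also prevents variable expansion in the body.
cat > example.py <<'EOF'
print("hello")
EOF
```

By contrast, a bare `cat > example.py` reads from the terminal until it receives an interactive EOF, which an unattended agent over SSH may never send.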

This behavior undermines trust in Gemini for autonomous coding tasks.

Hi @user3836,

Welcome to the Forum!
Thank you for reaching out!
Could you please let us know which Gemini model you are using?