[Bug Report] Immediate "Content Not Permitted" Error on 300k Token Codebase Input (Gemini 3 Preview) - False Positive

I am experiencing a consistent blocking issue when attempting to provide a large proprietary codebase (approximately 300k tokens) as context. Regardless of the prompt content, the request is rejected with a “Content not permitted” error almost immediately upon submission (approximately 2-3 seconds).

Environment:

  • Models: Gemini 3 Pro Preview & Gemini 3 Flash Preview
  • Interface: Google AI Studio
  • Input Size: ~300k tokens (well within the 1M context window).
  • Safety Settings: All categories set to “Block None”.

Steps to Reproduce:

  1. Load a pure code context of ~300k tokens (Proprietary source, unreleased, no dependencies).
  2. Send any prompt (for example, “Analyze this code” or “Explain the architecture”).
  3. Result: System returns “Content not permitted” within 2.5 seconds.

Troubleshooting Performed:

  • PII/Secrets Audit: I have verified the codebase contains no hardcoded secrets, API keys, PII, or credentials.
  • Content Audit: The code is 100% written by me. It contains no NSFW content, hate speech, malware patterns, or copyrighted text from external sources.
  • Quota Check: My account is nowhere near the RPM/TPM limits; the error occurred on the very first prompt of the day.
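For anyone wanting to reproduce the secrets audit above, a quick pass with stdlib regexes can confirm nothing obvious is present. This is a minimal sketch; the patterns are illustrative, not exhaustive, and a real audit should use a dedicated scanner:

```python
import re

# Illustrative patterns only -- these cover a few common secret shapes,
# not every credential format a real scanner would catch.
SECRET_PATTERNS = [
    re.compile(r"AIza[0-9A-Za-z_\-]{35}"),                   # Google API key shape
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"), # PEM key header
    re.compile(r"(?i)(api[_-]?key|secret|password)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
]

def scan_text(text: str) -> list[str]:
    """Return every substring that matches a secret-like pattern."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits

clean = 'def add(a, b):\n    return a + b\n'
dirty = 'PASSWORD = "hunter2hunter2"\n'
print(len(scan_text(clean)))  # 0
print(len(scan_text(dirty)))  # 1
```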

Hypothesis:
Given the speed of the rejection (<3 seconds for 300k tokens), it appears to be a false positive in the ingress safety filter (pre-processing) rather than the model’s generation evaluation. The filter seems to be flagging a specific pattern within the large context window erroneously.

Request:
Could a staff member please investigate the sensitivity of the ingress filters for large-context code uploads on the Preview models? The current strictness is rendering the long-context window unusable for legitimate development workflows.

Yeah, this looks like a trigger in the Ingress Content Safety (ICS) layer, which operates before the model even “sees” the tokens. The two likely causes are a “malware pattern” false positive or the copyright/recitation pre-check. You can try these steps to isolate the issue:

  1. Split your 300k-token upload into three 100k chunks and upload them one by one. This will help you identify whether a specific file (such as a large generated protobuf file or a legacy utility class) is triggering the filter.
  2. Even though this is an ingress filter, adding a clear authorization statement to your System Instructions can sometimes bias the heuristic.
  3. If you have a Google Cloud project, move this task to Vertex AI. Vertex AI uses a different enterprise safety stack where you can explicitly disable the “Recitation” and “Safety” filters at a more granular level than the AI Studio UI allows.
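The chunk-and-bisect approach in step 1 can be scripted. The sketch below only packs files into roughly token-sized chunks (using the common ~4-characters-per-token approximation) and narrows a flagged chunk down to a single file; the `is_flagged` predicate is a placeholder for an actual API call, which is not shown here:

```python
# Sketch: group source files into ~100k-token chunks, then bisect a
# flagged chunk to find the single file tripping the ingress filter.
# Assumes ~4 characters per token, a rough but common approximation.

CHARS_PER_TOKEN = 4
CHUNK_TOKEN_BUDGET = 100_000

def chunk_files(files: dict[str, str]) -> list[list[str]]:
    """Greedily pack files (name -> contents) into chunks under the budget."""
    chunks, current, used = [], [], 0
    for name, text in files.items():
        tokens = len(text) // CHARS_PER_TOKEN + 1
        if current and used + tokens > CHUNK_TOKEN_BUDGET:
            chunks.append(current)
            current, used = [], 0
        current.append(name)
        used += tokens
    if current:
        chunks.append(current)
    return chunks

def bisect_flagged(names: list[str], is_flagged) -> str:
    """Binary-search a flagged chunk down to the single offending file.

    `is_flagged` is a placeholder: in practice it would submit the
    concatenated files and return True on a "Content not permitted" error.
    """
    while len(names) > 1:
        half = names[: len(names) // 2]
        names = half if is_flagged(half) else names[len(half):]
    return names[0]

# Simulated run: pretend "legacy_util.py" is what trips the filter.
flagged = lambda names: "legacy_util.py" in names
print(bisect_flagged(["a.py", "b.py", "legacy_util.py", "c.py"], flagged))
# legacy_util.py
```

Roughly log2(N) uploads are enough to pin down one file out of N, which beats re-uploading the full 300k context on every guess.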