I am experiencing a consistently reproducible block when providing a large proprietary codebase (approximately 300k tokens) as context. Regardless of the prompt content, the request is rejected almost immediately upon submission (within roughly 2-3 seconds) with a “Content not permitted” error.
Environment:
- Models: Gemini 3 Pro Preview & Gemini 3 Flash Preview
- Interface: Google AI Studio
- Input Size: ~300k tokens (well within the 1M context window).
- Safety Settings: All categories set to “Block None”.
Steps to Reproduce:
- Load a pure code context of ~300k tokens (Proprietary source, unreleased, no dependencies).
- Send any prompt (for example, “Analyze this code” or “Explain the architecture”).
- Result: System returns “Content not permitted” within 2.5 seconds.
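For reference, the reproduction can be sketched as a minimal REST call against the public `generateContent` endpoint. This is an illustrative sketch, not my exact setup: the model id `gemini-3-pro-preview` is my assumption for the “Gemini 3 Pro Preview” entry, `context.txt` is a placeholder for the codebase dump, and `GEMINI_API_KEY` must be set in the environment.

```python
import json
import os
import urllib.request

# Assumed API id for the "Gemini 3 Pro Preview" model shown in AI Studio.
MODEL = "gemini-3-pro-preview"
ENDPOINT = f"https://generativelanguage.googleapis.com/v1beta/models/{MODEL}:generateContent"


def build_payload(code_context: str, prompt: str) -> dict:
    """Build a generateContent request with every safety category set to BLOCK_NONE,
    mirroring the "Block None" settings used in AI Studio."""
    categories = [
        "HARM_CATEGORY_HARASSMENT",
        "HARM_CATEGORY_HATE_SPEECH",
        "HARM_CATEGORY_SEXUALLY_EXPLICIT",
        "HARM_CATEGORY_DANGEROUS_CONTENT",
    ]
    return {
        "contents": [
            {"role": "user", "parts": [{"text": code_context + "\n\n" + prompt}]}
        ],
        "safetySettings": [
            {"category": c, "threshold": "BLOCK_NONE"} for c in categories
        ],
    }


if os.environ.get("GEMINI_API_KEY"):
    # context.txt is a placeholder for the ~300k-token code dump.
    payload = build_payload(open("context.txt").read(), "Analyze this code")
    req = urllib.request.Request(
        ENDPOINT + "?key=" + os.environ["GEMINI_API_KEY"],
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    print(urllib.request.urlopen(req).read().decode())
```

With the ~300k-token context in place of `context.txt`, this request fails with “Content not permitted” in under 3 seconds; a trivial context with the same settings succeeds.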
Troubleshooting Performed:
- PII/Secrets Audit: I have verified the codebase contains no hardcoded secrets, API keys, PII, or credentials.
- Content Audit: The code is 100% written by me. It contains no NSFW content, hate speech, malware patterns, or copyrighted text from external sources.
- Quota Check: My account is nowhere near the RPM/TPM limits; the error occurred on the very first prompt of the day.
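The secrets audit mentioned above can be sketched with a few regex checks. The patterns below are illustrative only, not the exact tooling I used; a thorough audit would rely on a dedicated scanner such as gitleaks or trufflehog.

```python
import re

# Illustrative secret patterns only; real scanners ship far larger rule sets.
SECRET_PATTERNS = {
    "google_api_key": re.compile(r"AIza[0-9A-Za-z\-_]{35}"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_assignment": re.compile(
        r"(?i)(?:api[_-]?key|secret|password)\s*[:=]\s*['\"][^'\"]{8,}['\"]"
    ),
}


def scan(source: str) -> list[tuple[str, int]]:
    """Return (pattern_name, line_number) for every suspected secret in the text."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((name, lineno))
    return hits
```

Running a scan of this kind over the full codebase returns no hits, which is why I am confident the block is not triggered by credential-like strings.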
Hypothesis:
Given the speed of the rejection (<3 seconds for 300k tokens), this appears to be a false positive in the ingress safety filter (pre-processing) rather than in the model’s generation-side evaluation. The filter seems to be erroneously flagging some pattern within the large context window.
Request:
Could a staff member please investigate the sensitivity of the ingress filters for large-context code uploads on the Preview models? The current strictness is rendering the long-context window unusable for legitimate development workflows.