# File Search + Structured Output + ThinkingConfig → nil response and no grounding metadata on Gemini 3

## Summary

When using `gemini-3-flash-preview` with File Search (fileSearchStore), combining `responseMimeType`/`responseSchema` with `thinkingConfig` in `generation_config` causes the response to return:
- `nil` chat completion (no text content)
- Empty/missing `groundingMetadata.groundingChunks`
- Extremely high `toolUsePromptTokenCount` (~190K–235K tokens)

Structured output and `thinkingConfig` each work fine with file search on their own; the bug occurs only when both are present in the same request.

## Environment

- Model: `gemini-3-flash-preview`
- API: `v1beta/models/gemini-3-flash-preview:generateContent`
- Date tested: February 27, 2026

## Reproduction

All requests use the same base parameters:

```json
{
  "model": "gemini-3-flash-preview",
  "contents": [
    {
      "role": "user",
      "parts": [{ "text": "Create 5 tasks in this project: Design, Development, Testing, Documentation, and Deployment" }]
    }
  ],
  "tools": [
    {
      "fileSearch": {
        "file_search_store_names": ["fileSearchStores/"]
      }
    }
  ],
  "system_instruction": {
    "parts": [{ "text": "You are a tool finder. Find relevant API tools via file search. Respond with JSON: {\"tools\": [\"tool_name\", ...]}" }]
  }
}
```

The file search store contains ~100 small markdown files (one per API tool, each with a name and description).
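The three test requests below differ only in `generation_config`. A minimal Python sketch for assembling them from the base payload (the helper name is ours; the truncated store name is copied verbatim from the report):

```python
import copy

# Base payload from the report. The store name is left truncated here
# exactly as it appears in the original report.
BASE_PAYLOAD = {
    "model": "gemini-3-flash-preview",
    "contents": [
        {
            "role": "user",
            "parts": [{"text": "Create 5 tasks in this project: Design, "
                               "Development, Testing, Documentation, and Deployment"}],
        }
    ],
    "tools": [{"fileSearch": {"file_search_store_names": ["fileSearchStores/"]}}],
    "system_instruction": {
        "parts": [{"text": 'You are a tool finder. Find relevant API tools via '
                           'file search. Respond with JSON: {"tools": ["tool_name", ...]}'}]
    },
}


def with_generation_config(generation_config):
    """Return a deep copy of the base payload with the given generation_config attached."""
    payload = copy.deepcopy(BASE_PAYLOAD)
    payload["generation_config"] = generation_config
    return payload
```

Each test case is then `with_generation_config({...})` with the config shown in that test's JSON block.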

### Test 1: File search + structured output — WORKS

```json
{
  "generation_config": {
    "responseMimeType": "application/json",
    "responseSchema": {
      "type": "object",
      "properties": { "tools": { "type": "array", "items": { "type": "string" } } },
      "required": ["tools"]
    }
  }
}
```

**Result:** Valid JSON completion, 5 grounding chunks returned, `toolUsePromptTokenCount`: ~1,596

### Test 2: File search + thinkingConfig — WORKS

```json
{
  "generation_config": {
    "thinkingConfig": { "thinkingLevel": "low" }
  }
}
```

**Result:** Valid JSON completion, 5 grounding chunks returned, `toolUsePromptTokenCount`: ~1,450

### Test 3: File search + structured output + thinkingConfig — BROKEN

```json
{
  "generation_config": {
    "responseMimeType": "application/json",
    "responseSchema": {
      "type": "object",
      "properties": { "tools": { "type": "array", "items": { "type": "string" } } },
      "required": ["tools"]
    },
    "thinkingConfig": { "thinkingLevel": "low" }
  }
}
```

**Result:**
- `candidates[0].content.parts[0].text`: **missing/nil**
- `candidates[0].groundingMetadata`: **missing entirely**
- `usageMetadata.toolUsePromptTokenCount`: **198,824** (vs ~1,500 in working cases)

### Test 4: Inconsistent across thinking levels

Same as Test 3 but varying `thinkingLevel`:

| `thinkingLevel` | Completion | Grounding | `toolUsePromptTokenCount` |
| --- | --- | --- | --- |
| `"low"` | nil | empty | 198,824 |
| `"medium"` | valid JSON | 5 chunks | 28,057 |
| `"high"` | nil | empty | 234,865 |

`low` and `high` break. `medium` happens to work, but with much higher-than-normal token usage (28,057 vs. ~1,500 in the working cases).
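Until this is fixed, clients can at least detect the failure mode before trusting a response. A hypothetical guard based on the symptoms above (missing text, missing grounding metadata, inflated `toolUsePromptTokenCount`; the 100K cutoff is our guess from the observed values, not an official limit):

```python
def is_broken_response(response, token_threshold=100_000):
    """Heuristically detect the nil-completion failure described in this report.

    `response` is the parsed generateContent JSON body. The token threshold is
    an arbitrary cutoff chosen between the observed ~28K (working) and
    ~190K+ (broken) toolUsePromptTokenCount values.
    """
    candidates = response.get("candidates", [])
    parts = candidates[0].get("content", {}).get("parts", []) if candidates else []
    has_text = any(p.get("text") for p in parts)
    has_grounding = bool(candidates and candidates[0].get("groundingMetadata"))
    tool_tokens = response.get("usageMetadata", {}).get("toolUsePromptTokenCount", 0)
    # Broken: no text at all, or grounding vanished while tool-use tokens exploded.
    return (not has_text) or (not has_grounding and tool_tokens > token_threshold)
```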

## Impact

- Any application relying on `groundingMetadata.groundingChunks` to extract file search results (e.g., for tool discovery via RAG) gets zero results when structured output + thinking are both enabled.
- The ~190K+ `toolUsePromptTokenCount` suggests the model is doing excessive internal work before failing silently.
- This forces a choice between structured output and thinking when using file search, even though each works independently.

## Workaround

Use `gemini-2.5-flash` without `generation_config` (no structured output, no thinkingConfig). Request JSON format via the system instruction and parse it from the text response. File search and grounding metadata work reliably in this configuration.
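With this workaround the JSON must be pulled out of the plain-text response, and models asked for JSON via the prompt often wrap it in a Markdown code fence. A minimal extraction sketch (the fence-stripping is our assumption about typical model output, not documented behavior):

```python
import json
import re

def parse_json_from_text(text):
    """Parse a JSON object from a model's plain-text response.

    Handles both bare JSON and JSON wrapped in a Markdown code fence,
    which models commonly emit when asked for JSON via the system prompt.
    """
    # `{3} matches a run of three backticks without embedding a literal fence here.
    match = re.search(r"`{3}(?:json)?\s*(.*?)\s*`{3}", text, re.DOTALL)
    payload = match.group(1) if match else text.strip()
    return json.loads(payload)
```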

## Related

This may be related to an earlier issue where Gemini 3 returned file search as an external function call instead of executing it server-side: "File Search Tool in combination with response schema not working".