`groundingChunks` Missing from `groundingMetadata` in Gemini Responses

When using the Gemini models `gemini-flash-latest` and `gemini-3.1-pro-preview`, the response object no longer includes the `groundingChunks` field inside `groundingMetadata`.

`groundingMetadata` still includes fields such as:

  • `searchEntryPoint`
  • `webSearchQueries`

However, `groundingChunks` is no longer present.

We’re experiencing a complete loss of `groundingMetadata` in responses from `gemini-3-flash-preview` when using Google Search grounding, starting April 8, 2026. This was working correctly on April 7.

Environment:

  • SDK: `@google/genai` v1.48.0 (Node.js/TypeScript)

  • Models tested: `gemini-3-flash-preview`, `gemini-3.1-pro-preview`, `gemini-3.1-flash-lite-preview`, `gemini-2.5-flash`

  • Endpoint: default (`v1`); also tested `v1beta` with the same result

  • All models affected

Request config:

{
  "model": "gemini-3-flash-preview",
  "config": {
    "temperature": 1,
    "maxOutputTokens": 2048,
    "tools": [{ "googleSearch": {} }],
    "thinkingConfig": { "thinkingLevel": "low" }
  }
}

Observed behavior:

  • `toolUsePromptTokenCount: 5372` confirms Google Search is in fact being invoked and consuming tokens

  • `candidates[0].content` contains generated text (the model produces content fine)

  • `candidates[0].groundingMetadata` is completely absent: not empty, not zero chunks, simply missing from the response

  • No `groundingChunks`, no `groundingSupports`, no `webSearchQueries`, no `searchEntryPoint`

  • Content is hallucinated (fabricated news stories) because grounding data isn’t being fed back to the model

What we’ve ruled out:

  • Not a `thinkingConfig` interaction: same result with and without thinking enabled

  • Not SDK-related: upgraded from v1.34.0 to v1.48.0, same behavior

  • Not model-specific: tested across `gemini-3-flash-preview`, `gemini-3.1-pro-preview`, `gemini-3.1-flash-lite-preview`, and `gemini-2.5-flash`

  • Not endpoint-specific: tested both `v1` and `v1beta`

  • One caveat: occasionally one or two junk chunks appear (titled "Google Search") with no real source URLs
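As a stopgap, our pipeline now filters grounding chunks down to those carrying a real source URI. The `web.uri`/`web.title` shape below matches the documented `groundingChunks` structure; the `usableSources` helper itself is ours, not an SDK API:

```typescript
type GroundingChunk = { web?: { uri?: string; title?: string } };
type Candidate = { groundingMetadata?: { groundingChunks?: GroundingChunk[] } };

// Drop the occasional junk entries (e.g. a bare "Google Search" title with no URI)
// and keep only chunks that point at a real source.
function usableSources(candidate: Candidate): { uri: string; title: string }[] {
  const chunks = candidate.groundingMetadata?.groundingChunks ?? [];
  return chunks
    .filter((c): c is { web: { uri: string; title?: string } } => Boolean(c.web?.uri))
    .map((c) => ({ uri: c.web.uri, title: c.web.title ?? c.web.uri }));
}
```

When `usableSources` comes back empty we skip citations for that item rather than publish fabricated ones.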

Impact: Production newsletter delivery system that generates AI-curated newsletters with cited sources for paying subscribers. We’ve had to fall back to Perplexity as an alternative provider. All newsletters sent today lack source citations.

Raw response snippet (note: `groundingMetadata` is absent):

{
  "candidates": [{
    "content": { "parts": [{ "text": "...", "thoughtSignature": "..." }], "role": "model" },
    "finishReason": "STOP",
    "index": 0
  }],
  "usageMetadata": {
    "promptTokenCount": 78,
    "candidatesTokenCount": 219,
    "totalTokenCount": 7290,
    "toolUsePromptTokenCount": 5372,
    "thoughtsTokenCount": 1621
  }
}

This appears related to the regression reported in this thread. The `toolUsePromptTokenCount` confirms the search tool is invoked, but the metadata bridge between the search results and the response is broken.