Critical Regression: Gemini 3.1 Pro Update (Feb 19) Completely Broke NotebookLM’s RAG & Grounding

Hey everyone,

I’m a Pro/Ultra user and I need to raise a massive red flag about the current state of NotebookLM following the forced migration to the Gemini 3.1 Pro architecture around February 18-19.

Before the Lunar New Year (Tet), the system was actually quite stable and reliable for deep, multi-document research. Now, it feels like the model has been completely lobotomized for real-world tasks. It seems perfectly clear that the dev team optimized this update to hit high scores on synthetic reasoning benchmarks (like reaching 77.1% on ARC-AGI-2) while completely neglecting basic QA for messy, real-world RAG (Retrieval-Augmented Generation) workflows.

As a result, we are dealing with basic, borderline-silly technical failures. Here is a breakdown of the critical regressions:

1. Severe Source Blindness (Ingestion & Retrieval Failure)

I can clearly see my uploaded documents in the sidebar, but the AI actively gaslights me, claiming the documents don’t exist or that the requested content isn’t in them. It looks like a massive index mismatch or a vectorization failure caused by the backend update. There’s also a documented bug where files exceeding roughly 380k words just silently fail to index properly, even though the official limit is 500k.

2. “Deep Reading” is Dead & The 2-Line Thinking Nerf

Even when the system acknowledges a source, it refuses to actually read it deeply. I’ve noticed that the “Thinking” process (Chain-of-Thought) has been severely throttled down to exactly two lines for Pro/Ultra users. Because its “thinking budget” is artificially restricted to save compute, it skips the multi-step micro-drilldown needed to scan long PDFs and just spits out lazy, superficial summaries.

3. Hallucinations Over Grounding (Interpretive Drift)

NotebookLM’s entire selling point is that it’s strictly “source-grounded”. But right now, when the retrieval step fails, the AI refuses to just say “I don’t know.” Instead, it performs “coherence repair”—fabricating logical guesses based on its general training data or blending information from completely unrelated documents in my notebook. It’s acting exactly like a standard, hallucinating chatbot.

4. Broken Multilingual Retrieval (The Language Bias)

This has been an ongoing issue since launch, but it’s worse now. If I have a notebook with both English and Vietnamese sources, and I prompt it in Vietnamese, it heavily biases towards the Vietnamese documents. The semantic embedding model just clusters the prompt with the Vietnamese vectors and completely ignores highly relevant English sources. This “cross-lingual token bleed” makes the tool practically useless for non-English speakers trying to research complex English documents.
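To make the language-bias failure mode in item 4 concrete, here is a toy sketch (not NotebookLM’s actual pipeline, and the vectors are entirely made up): if an embedding model places same-language texts closer together than same-topic texts, then cosine-similarity retrieval will rank off-topic same-language chunks above a genuinely relevant chunk in the other language.

```python
import math

def cosine(a, b):
    # Standard cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical 2-D embeddings: pretend dimension 0 encodes
# "text is Vietnamese" and dimension 1 encodes "topic relevance".
# The language signal dominates the geometry.
chunks = {
    "vi_offtopic_1": [0.9, 0.1],  # Vietnamese, barely relevant
    "vi_offtopic_2": [0.8, 0.2],  # Vietnamese, barely relevant
    "en_relevant":   [0.1, 0.9],  # English, highly relevant
}
vi_query = [0.9, 0.3]  # a Vietnamese prompt about the topic

ranked = sorted(chunks, key=lambda k: cosine(vi_query, chunks[k]),
                reverse=True)
print(ranked)  # en_relevant ranks last despite being the most on-topic
```

If retrieval then keeps only the top-k chunks, the relevant English source never reaches the answer step at all, which matches the behavior described above.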

The Takeaway

It honestly feels like Google shipped an experimental playground just to chase “Agentic AI” hype, completely sacrificing the precision grounding that made NotebookLM so great in the first place. We are paying for premium tiers only to deal with “Resource not found” errors, aggressive context pruning, and an AI that acts like a highly confident pathological liar.

Can we please get a “Stable Context” mode or a rollback option? We need a reliable, grounded RAG tool, not an unstable beta test.

3 Likes

NotebookLM’s grounding and retrieval system is indeed broken. I am a Google AI Pro/Ultra subscriber using NotebookLM with approximately 300 merged PDF sources for academic research. Since the Gemini 3.1 Pro update on February 20, full-notebook retrieval has been severely degraded. For example, when I ask the notebook for the conclusions of a specific article, the system replies that the information is missing and that it can only access isolated fragments, such as a figure and a table. Worse, when pressed for authorship, the system hallucinates, presenting names cited in table footnotes as the paper’s authors. The same queries return complete, accurate results when I single-select the source file containing the paper. This retrieval and grounding failure did not exist before the update, and it undermines the value of the Pro and Ultra plans’ large-source capacity. If I have to select the source for each query, then NotebookLM shifts from a research assistant to a PDF reader.

2 Likes

Man, you hit the nail on the head. I’m seeing the exact same thing on my end, and it’s beyond frustrating.

Check this out for a ‘smoking gun’ example: I have a source clearly titled ‘Selected Essays by Fukuzawa Yukichi’ selected in my notebook. When I asked a simple question about Fukuzawa’s model of the state, the AI straight-up told me: ‘Based on the provided sources, there is no mention of anyone named Fukuzawa.’

Instead of looking at the actual file, it started hallucinating and pivoting to Francis Fukuyama just because I happened to have other documents by him in the same notebook.

But here’s the kicker—and this confirms your point about the ‘single-select’ workaround: If I deselect every other source and only pick that one Fukuzawa file, the AI suddenly ‘wakes up,’ apologizes for the oversight, and gives me the right answer.

This is the core of the problem: NotebookLM has shifted from being a powerful ‘multi-source research brain’ to a manual, one-file-at-a-time reader. Having to spoon-feed the AI specific files completely defeats the purpose of the Pro and Ultra plans’ large-source capacity. If the RAG (Retrieval-Augmented Generation) can’t handle grounding across a whole notebook anymore, then the ‘assistant’ part of the tool is basically dead.

I really hope the engineering team takes this seriously. This is a massive regression that’s breaking workflows for those of us doing deep academic and research work.

1 Like

I can independently confirm this regression. Here is the report I posted on r/NotebookLM: [https://www.reddit.com/r/notebooklm/comments/1rfefv1/gemini_31_pro_update_broke_fullnotebook_retrieval/]

1 Like

Hi

Thank you for pointing this out. We have rolled out a fix for this issue. Do let us know if you are still facing any problems.

2 Likes

Confirmed! I’ve just re-tested the grounding with my specific research sources. The system is now correctly identifying and retrieving information from the ‘Selected Essays by Fukuzawa Yukichi’ source even when multiple documents are active.

It no longer hallucinates or pivots to unrelated authors as it did yesterday. The RAG pipeline seems to be back to its stable state. Thanks to the team for the quick fix - this is vital for those of us doing deep academic cross-referencing!

1 Like

Thank you for the response and the fix. I can confirm that full-notebook retrieval across my 300 sources is working again. The same query that previously returned only fragments and hallucinated authorship now correctly identifies the article, its authors, and its content. Deep retrieval and accurate grounding appear to be restored.

1 Like