Severe Stability Issues & UX Regressions in Gemini Ultra / NotebookLM (Report from SE Asia)

Hi everyone,

I’m writing this because I’ve hit a wall with the current state of the Google AI ecosystem. I’m a Gemini Ultra subscriber based in Vietnam (Asia), and while I rely heavily on these tools for deep research, the experience over the last few days (specifically leading up to today, April 3rd) has been incredibly frustrating and, frankly, unusable for professional workflows.

I’ve noticed a cluster of issues that seem to align with the recent system-wide updates and the Gemma 4 rollout, but there are some very specific “quality of life” bugs that are breaking my productivity:

1. NotebookLM: “Source Blindness” and Parsing Nightmares

  • Shallow Retrieval: NotebookLM has become “lazy.” It’s no longer digging deep into my uploaded sources. Instead of the robust RAG (Retrieval-Augmented Generation) we’re paying for, it feels like it’s just skimming the surface or relying on internal training data.

  • The Citation Vanishing Act: Many responses are missing the standard source anchors entirely. For a tool whose USP is “groundedness,” this is a dealbreaker.

  • Mid-stream Cutoffs: I’m frequently getting responses that just… stop. Halfway through a sentence, the AI quits, leaving me with incomplete data and no way to “continue” effectively without losing context.

  • The TXT-to-Markdown Mess: This is a big one. When I upload .txt files, NotebookLM seems to force-convert them into a single, massive Markdown block. It strips all line breaks and merges everything into a "wall of text." It's a nightmare to read, and it clearly affects how the model indexes the information.

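As a stopgap for the line-break stripping, I've been pre-processing my .txt files before upload. This is just a sketch of the idea, assuming the converter honors standard Markdown hard breaks (two trailing spaces at the end of a line); the function name is my own:

```python
# Hypothetical pre-processing workaround: rewrite a .txt file so that
# single line breaks survive a naive text-to-Markdown conversion.
# In Markdown, a line ending in two spaces forces a hard break, and a
# blank line starts a new paragraph.

def harden_line_breaks(text: str) -> str:
    """Append two trailing spaces to every non-empty line so a
    Markdown renderer keeps the original line structure."""
    out_lines = []
    for line in text.splitlines():
        stripped = line.rstrip()
        # Leave blank lines alone; they already separate paragraphs.
        out_lines.append(stripped + "  " if stripped else "")
    return "\n".join(out_lines)

if __name__ == "__main__":
    sample = "Line one\nLine two\n\nNew paragraph"
    print(harden_line_breaks(sample))
```

No guarantees this survives whatever parser NotebookLM runs server-side, but in my tests it at least keeps paragraphs from collapsing into one block.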
2. Gemini Ultra: Prompt “Ghosting”

  • Input Field Bouncing: In the main Gemini interface, I’ll send a complex prompt, the “thinking” animation starts for a split second, and then everything abruptly stops. The command isn’t sent, no response is generated, and the text I just typed simply jumps back down into the input box as if I never hit enter. It’s like the backend is rejecting the request silently.

3. Latency & Regional Context

Being in Southeast Asia, I'm aware of the ongoing subsea cable issues (AAE-1/APG) affecting our bandwidth, but these feel like server-side logic failures rather than just slow internet. The "Infinite Thinking" loop is real: I'm waiting minutes for simple queries that used to take seconds.

Is anyone else on the Ultra plan seeing this level of instability? It feels like the system is buckling under the weight of the new model deployments, and the “premium” experience is currently feeling very “beta.”

Would love to hear from the dev team if there’s a timeline for a stability patch, especially regarding the NotebookLM file parsing and the prompt ghosting issues.

Stay productive (try to),


Just a quick follow-up to my original post: I've found that using "hardened" custom instructions to force a stricter RAG protocol can temporarily mitigate the "Source Blindness" issue, making deep research somewhat viable again. However, this is just a stopgap and doesn't address the underlying server-side stability problems we're currently facing.
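For anyone who wants to try something similar, here's roughly the shape of instruction I mean. This is my own illustrative phrasing, not an official template, so adapt it to your sources:

```
You must answer ONLY from the uploaded sources. For every factual claim,
cite the specific source passage it came from. If the sources do not
contain the answer, reply "not found in the sources" instead of guessing.
Do not use general knowledge to fill gaps in the source material.
```

It doesn't fix the missing citation anchors, but it does seem to push the model back toward actually reading the sources.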

The broader issue here is how Google treats its Ultra subscribers. Many of us are paying the premium price because we need a higher quality of intelligence and more robust retrieval for professional workflows, not just a higher daily query count. Right now, it feels like the Ultra plan is being treated as a ‘volume’ upgrade rather than a ‘quality’ upgrade. If the actual reasoning depth and grounding accuracy aren’t significantly better than the Pro tier—which has been struggling with hallucinations and shallow summaries recently—then the value proposition for Ultra starts to crumble.

Google needs to understand that for deep researchers, a single high-quality, perfectly grounded response is worth more than a hundred fast, superficial ones. We need the system to prioritize ‘thinking budget’ and retrieval integrity for the Ultra lane, especially when the infrastructure is under heavy load from new model rollouts like Gemma 4.
