I am writing this as a paying Google AI Ultra/Pro subscriber who, like many in the professional community, is reaching a breaking point. We are currently witnessing a frustratingly predictable pattern: Google releases an update, the system breaks for nearly a month, it achieves a brief window of stability, and then a new “silent update” resets the chaos.
For those of us relying on Gemini and NotebookLM for mission-critical workflows, the current state of the ecosystem feels less like a premium service and more like a perpetual, paid beta test. Specifically, over the last few days (early April 2026), we’ve encountered:
- Metric Fetishism vs. Real-World Utility: While benchmarks may be climbing, actual usability is regressing. NotebookLM is suffering from “Source Blindness”—ignoring the very documents we upload—and vanishing citations, which defeats the entire purpose of a grounded RAG tool.
- The “Ghosting” Prompt Issue: In Gemini Ultra, we frequently encounter “Prompt Ghosting,” where requests are silently rejected or trapped in “Infinite Thinking” loops even for simple queries.
- Silent Quota Nerfs: Substantial reductions in usage limits and “thinking budgets” are being implemented without a single email or official notification. This lack of transparency is unacceptable for users who have integrated these tools into their production environments.
- Regional Fragility: For users in regions like Southeast Asia, where subsea cable maintenance (APG/AAE-1) is already impacting latency, these server-side logic failures make the tools completely unusable.
Why is a global leader like Google so clumsy with these rollouts?
A $20 or $250 monthly subscription is not a donation; it is a contract for a reliable service. “Staged rollouts” should not mean “unannounced instability.” We are tired of the “corporate gaslighting” where status dashboards remain green while our workflows are on fire.
We demand:
- Transparent Changelogs: No more silent updates. If quotas or model logic change, notify us.
- Stability over Speed: Stop chasing weekly benchmarks if it means breaking core features like RAG grounding and context memory.
- Better Communication: A dedicated status page for AI-specific logic errors (not just server uptime) and a clear path for professional feedback.
It is time for Google to stop treating its most loyal, paying users as data points in an uncontrolled experiment. We need tools that work, not just models that win at tests.
Respectfully,
A frustrated but hopeful Power User.