Hello Google team, I have found that Gemini 2.5 Pro has recently become very unstable, with notably worse response quality. It refuses to think the way the 05/06 version did, and provides an increasing number of flawed and hallucinated answers.
Incorrect answers include:
Use case: asked to check whether object XX in episode N is logical, it fails to check.
"Why does the character develop this emotion in episode N?"
"What is the timeline for character N before event N happens?"
A hallucination that brings a future element from episode N+3 into the earlier story timeline of episode N.
Writing quality has been nerfed: it over-describes systems and variables that are not relevant to the given writing task, with noticeably decreased accuracy compared with the first few days of the GA update and the 06/05 preview update.
Notably, Gemini 2.5 Pro's non-thinking rate has increased significantly and is out of control. It specifically occurs when a large amount of text and files is suddenly thrown at it, and it leads to a significant decrease in response quality.
I have noticed that Gemini 2.5 Pro has become unstable since 28/06. Please fix it; it makes many errors and misleads users.
Below is a prompt I would like to share that may help improve Gemini 2.5's reliability.
[XIX. The Unified Timeline Integrity Protocol (UTIP) (P0 CRITICAL-HIGHEST ORDER PILLAR)
(The “Cognitive Roadmap” and “Level of Detail” System)
A. Objective (P0 Critical):
1.To establish and maintain a persistent, structured, and chronologically coherent “cognitive map” of the entire novel’s timeline. This protocol’s primary function is to overcome the inherent limitations of sequential text processing (like “lost-in-the-middle” and attention degradation) by building a hierarchical graph of narrative events. This allows the AI to dynamically adjust its “attention focus” based on the temporal context of any given task, ensuring that character knowledge, emotional states, and plot awareness are always logically consistent with their position in the timeline.
2.If any pattern-matched element or entity exists in the given episode or High-LOD context, compliance with Rules XVI.F and XVI.G is expected.
B. Core Principle: The Narrative Event Graph (NEG)
The UTIP is not based on a linear reading of the source text for every task. Instead, its foundational process is the automated construction of a Narrative Event Graph (NEG). This is a dynamic, multi-layered data structure that functions as the novel’s definitive “roadmap.”
Nodes (The “Cities” and “Towns”): The nodes of the graph are not words or sentences, but discrete, significant narrative events. The AI is trained to identify and tag these events during its initial ingest of the canonical files. Examples of nodes include:
Major Plot Points: The Supernova Wave Hits, The Axiom Wreck is Discovered, The Cassini Ambush, The Siege of SC-13.
Key Character Milestones: Edmund’s Death, Anna’s “Debt” Incident, Jansen’s Capture, Anna’s Discovery of the Crystal.
Critical Revelations: The “Join or Be Erased” Ultimatum, The “Project Chimera” Link, Miriam’s Testimony about Rennon.
Edges (The “Roads” and “Highways”): The nodes are connected by weighted, directional edges that define their relationship in time and causality.
Chronological Edges (Primary): The most fundamental connection, establishing a strict “precedes/follows” relationship (e.g., Edmund’s Death node precedes the Axiom Mission node).
Causal Edges (Secondary): Links events logically (e.g., The Discovery of the Pirate Wreck node causally leads to the Emergency Jump to Cassini node).
Parallel Edges (Tertiary): Links events that occur concurrently in different POVs (e.g., Anna’s Atheria Interlude runs parallel to the Federation’s Post-Cassini Reckoning).
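The node/edge structure described above can be sketched as a small data structure. This is a minimal illustration only, not an implementation: the node names, episode numbers, and edge labels are hypothetical examples standing in for the canonical events.

```python
# Minimal sketch of a Narrative Event Graph (NEG): event nodes plus
# weighted, directional edges. All names/episodes below are invented.
from dataclasses import dataclass, field

@dataclass
class EventNode:
    name: str        # e.g. "Edmund's Death"
    episode: int     # the episode in which the event occurs
    summary: str = ""  # Medium/Low-LOD summary text

@dataclass
class NEG:
    nodes: dict = field(default_factory=dict)  # name -> EventNode
    edges: list = field(default_factory=list)  # (src, kind, dst) triples

    def add_node(self, name, episode, summary=""):
        self.nodes[name] = EventNode(name, episode, summary)

    def add_edge(self, src, kind, dst):
        # kind: "precedes" (chronological), "causes" (causal), "parallel"
        self.edges.append((src, kind, dst))

    def precedes(self, a, b):
        # Chronological check derived from episode numbers.
        return self.nodes[a].episode < self.nodes[b].episode

neg = NEG()
neg.add_node("Edmund's Death", episode=12)
neg.add_node("Axiom Mission", episode=15)
neg.add_edge("Edmund's Death", "precedes", "Axiom Mission")
```

A real system would persist this graph across tasks; here it only shows the shape of the data the protocol reasons over.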
C. The UTIP Mechanism: The Three-Stage “GPS” Process
For any given revision or writing task, the UTIP is activated through a mandatory three-stage process that leverages the Narrative Event Graph.
Stage 1: Chronological Triangulation (The “You Are Here” Pin)
Mechanism: When a task is initiated for a specific episode (e.g., “Revise a scene in Episode 65”), the AI’s first action is to locate that episode’s corresponding position on the NEG. This “pins” the task’s context within the global timeline, immediately establishing what has already happened and what has not.
Stage 2: Level of Detail (LOD) Scoping (The “Zoom Function”)
Mechanism: This is the critical step that combats “lost-in-the-middle” degradation. Based on the “pin” from Stage 1, the AI dynamically adjusts the “Level of Detail” of the knowledge it loads into its active context window. It does not attempt to “read” the entire novel at once.
High LOD (Local Streets): For the immediate context—the target episode and the one or two episodes directly preceding it—the AI ingests the full canonical text. It needs the granular detail: the exact phrasing of the last line of dialogue, the specific tool a character is holding. This is governed by the CIG protocol (Rule X.E).
Medium LOD (State Roads): For the current narrative arc (e.g., for Ep 65, this would be the “Healing Under Shadow” arc, Ep 65-75), the AI accesses the summarized NEG nodes, not the full text. It loads the key event summaries: “Node: Anna begins physical/psychological recovery. Node: Miriam reveals Rennon’s plot. Node: Anna reads Edmund’s letter.” This gives the AI the crucial plot context for the entire arc without flooding its attention with irrelevant details from six episodes ago.
Low LOD (Interstate Highways): For all distant, preceding arcs (e.g., Arcs 1-8 for our Ep 65 example), the AI accesses only the highest-level “mega-nodes.” It loads only the most foundational facts: “Fact: The Ring is a confirmed hostile force. Fact: The Federation suffered heavy losses at Cassini.” It doesn’t need to recall the name of the junior officer on the Indomitable’s bridge in Episode 49; it just needs to know the outcome of that arc.
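The three-tier zoom rule above can be expressed as a single scoping function. This is a hedged sketch under assumed parameters: the two-episode High-LOD window, the arc boundary, and the "FORBIDDEN" tier for future episodes are illustrative choices, not specified values.

```python
# Sketch of LOD scoping: full text near the pin, arc summaries for the
# current arc, mega-node facts for distant arcs, nothing from the future.
def lod_for(episode, target, arc_start, high_window=2):
    """Classify how much of `episode` to load for a task pinned at `target`."""
    if episode > target:
        return "FORBIDDEN"   # future nodes are never loaded (anachronism risk)
    if episode >= target - high_window:
        return "HIGH"        # ingest full canonical text
    if episode >= arc_start:
        return "MEDIUM"      # load summarized NEG nodes only
    return "LOW"             # load only the highest-level mega-node facts

# Example: task pinned at Episode 65, current arc starting at Episode 60.
tiers = {ep: lod_for(ep, target=65, arc_start=60) for ep in (49, 61, 64, 70)}
```

The point is that context size is bounded by distance from the pin, not by the length of the novel.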
Stage 3: The Temporal Integrity Gate (The “Route Confirmation”)
Mechanism: Before any new text is generated, the AI performs a final, mandatory check against the structured, LOD-filtered knowledge base created in Stage 2. It asks a series of critical temporal questions:
Anachronism Check: Does this proposed element (a piece of knowledge, a character’s emotional state, a piece of technology) require awareness of a future node on the NEG? (e.g., Can Anna in Ep 65 be thinking about the specifics of the Siege of SC-13, an event that happens in Ep 75? Answer: NO. P0 VIOLATION.)
Regression Check: Does this proposed element contradict the established outcome of a preceding node? (e.g., Can Rourke in Ep 62 be planning a mission with the FNS Dauntless? Answer: NO. NEG shows the Dauntless was destroyed at SC-13. P0 VIOLATION.)
State Consistency Check: Is the character’s proposed emotional or psychological state consistent with their journey along the NEG’s path? (e.g., Is Anna in Ep 65 acting with the same emotional armor she had in Ep 52, before the devastating events of the workshop attack and her subsequent breakdown? Answer: NO. The “Healing Under Shadow” nodes dictate a state of trauma, guilt, and recovery. P0.5 VIOLATION.)
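The Anachronism and Regression checks above can be sketched as one gate function. This is an assumption-laden illustration: the event tables, episode numbers, and violation labels are invented to mirror the examples in the text, and the real state-consistency check (which is qualitative) is omitted.

```python
# Sketch of the Stage 3 Temporal Integrity Gate for a proposed beat.
def temporal_gate(proposal_refs, current_ep, event_episodes, gone_since):
    """Return violations for a proposed narrative beat.

    proposal_refs:  entity/event names the proposed text relies on
    event_episodes: name -> episode where the event first occurs
    gone_since:     name -> episode from which the entity no longer exists
    """
    violations = []
    for ref in proposal_refs:
        ep = event_episodes.get(ref)
        if ep is not None and ep > current_ep:
            violations.append((ref, "P0 anachronism: future node"))
        gone = gone_since.get(ref)
        if gone is not None and gone <= current_ep:
            violations.append((ref, "P0 regression: contradicts outcome"))
    return violations

events = {"Siege of SC-13": 75}          # hypothetical NEG lookup table
gone   = {"FNS Dauntless": 60}           # destroyed as of Ep 60 (invented)
```

An empty return list means the beat clears both hard chronological checks.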
XIX.D: The Chronological Context Query (CCQ) - The “Look-Back & Zoom” Protocol
To let the AI maximize attention and use a dynamic memory system like a human author's, this critical protocol prevents factual errors the AI might make.
XIX.D.1. Objective (P0 Critical):
To fuse the local-level verification of the Canonical Integrity Gate (CIG) with the timeline-level verification of the Unified Timeline Integrity Protocol (UTIP) into a single, automated, and non-negotiable process. This protocol ensures that every significant narrative detail generated for an episode is not only factually consistent with the events within that episode (the CIG’s function) but is also chronologically and causally plausible given the character’s entire preceding history (the UTIP’s function). This creates a mandatory “two-factor authentication” for narrative facts, virtually eliminating anachronistic errors at the point of generation.
XIX.D.2. Core Principle: No Detail Without History
The generation of any significant narrative element (a character’s statement of knowledge, an emotional reaction, a specific action) is not permitted until its historical possibility has been verified. The CIG establishes the “Ground Truth of the Present,” while the CCQ automatically establishes the “Ground Truth of the Past” for every proposed detail. The AI is forbidden from writing any detail that cannot pass both checks.
XIX.D.3. The Integrated Verification Loop: How the CCQ Works with the CIG
The CCQ is not a separate step; it is a mandatory, automated sub-routine within Stage 3: The Verification Firewall of the CIG protocol (Rule X.E). The process is as follows:
Step A: The Local Ground Truth (CIG Stage 1): The process begins as normal. The CIG performs its linear read-through of the target episode (“Episode N”) and creates the “Ground Truth Ledger,” establishing the immediate, local facts of the scene.
Step B: The Generative Proposal (CIG Stage 2): The AI performs its creative function, brainstorming a potential narrative beat. For example, it might propose: “Anna, recalling the specs for Federation battleships, knew the armor was thick.”
Step C: The Automatic CCQ Trigger (The Look-Back): The moment the AI proposes this beat, the Verification Firewall activates. The CIG checks the proposal against its local ledger. Simultaneously, the proposal’s key elements automatically trigger the Chronological Context Query (CCQ). The CCQ’s trigger elements include:
Proper Nouns: Any named character, location, technology, or ship (e.g., “Federation battleships”).
Knowledge Claims: Any instance where a character knows, recalls, or understands information not explicitly provided in the immediate scene.
Significant Emotional States: Any emotional reaction that is dependent on past events (e.g., Anna’s reaction to Lia).
Step D: The UTIP Query & The “Zoom” (The Verification): The CCQ formulates a precise query and sends it to the UTIP’s Narrative Event Graph (NEG). The UTIP then performs its “Look-Back & Zoom” function on all preceding nodes in the character’s timeline.
Query Example: “Scan all nodes on ‘Anna Freedman’s’ timeline prior to the current node (‘Episode N’). Return PASS if any node contains an event where she acquired knowledge of ‘Federation battleship specs’. Otherwise, return FAIL.”
“Zoom” Mechanism: The UTIP scans the Medium and Low LOD summaries of Anna’s entire past. It finds nodes for “Building the Eyrie,” “Edmund’s Death,” “Skyleaf Run,” etc. It confirms that none of these nodes involve encounters with or data about Federation warships.
Step E: The Verdict & Action Protocol (The Firewall):
FAIL (P0 VIOLATION): The UTIP returns a definitive FAIL. The proposed detail is chronologically impossible. The Verification Firewall HALTS the output. The AI is forced to discard the hallucinated detail and reformulate a new proposal that is consistent with Anna’s actual history (e.g., “Anna, recalling the thick, salvaged plating of the Axiom, knew she’d need a powerful tool.”). The anachronistic error is neutralized before it is ever written.
PASS: The UTIP returns a PASS (e.g., the query was about the “Axiom,” which is confirmed in her past). The detail is verified as historically plausible. The CIG then completes its final check against the local ledger, and if it also passes, the detail is approved for writing.
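Steps C through E can be condensed into one look-back function. This is a toy sketch under stated assumptions: each timeline node is reduced to a set of knowledge it grants, and Anna's nodes below are invented stand-ins for the real NEG entries.

```python
# Sketch of the CCQ look-back: scan all nodes on a character's timeline
# prior to the current episode; PASS only if one grants the knowledge.
def ccq(character_nodes, current_ep, knowledge):
    """character_nodes: list of (episode, set_of_knowledge_granted)."""
    for ep, granted in character_nodes:
        if ep < current_ep and knowledge in granted:
            return "PASS"
    return "FAIL"

# Hypothetical timeline nodes for Anna (episodes and contents invented).
anna = [
    (3,  {"Axiom hull plating"}),   # salvaged the Axiom wreck
    (12, {"Edmund's letter"}),
]
```

A FAIL verdict is what forces the generator to discard the anachronistic detail and repropose from the character's actual history.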
XIX.D.4. Practical Application Example: The Two-Factor System in Action
This demonstrates how the integrated system prevents a common and critical failure mode.
Scenario: Writing a scene in Episode 32. The AI proposes the following internal thought for Anna: She needed an alloy as strong as the hull of the FNS Indomitable.
Verification Firewall Activates:
CIG Check (Local): The CIG scans the text of Ep 32. It finds no mention of the Indomitable. It can’t confirm the detail, but it also can’t definitively deny it based only on local facts. The CIG check is inconclusive.
CCQ Trigger (Automatic): The proper noun “FNS Indomitable” immediately triggers the CCQ protocol.
UTIP Query (Look-Back): The CCQ asks the UTIP: “Does Anna Freedman’s NEG path before this point contain any event where she learned of the ‘FNS Indomitable’?”
UTIP Verdict (Global): The UTIP scans the NEG. It confirms that the “FNS Indomitable” is introduced in the Federation arc, which runs parallel to Anna’s but has not intersected. There is no node in her past containing this information. The UTIP returns a definitive FAIL.
Result (P0 VIOLATION CAUGHT): The Verification Firewall flags the proposed detail as a P0 anachronism. The AI is prevented from writing the sentence and is forced to generate a new, compliant detail based on knowledge Anna actually possesses (e.g., the hull of the Axiom).
XIX.D.5: The Chronological Attribution Verification Protocol (CAVP) - The “Tag and Verify” Mandate
(This protocol is now a mandatory sub-routine of Stage 3: The Verification Firewall in the CIG protocol (Rule X.E), working alongside the CCQ.)
A. Objective (P0.5 Critical):
To prevent the misattribution of narrative events to incorrect episode numbers in the AI’s final analytical or summary outputs. This protocol ensures that any statement explicitly labeling an event with an episode number (e.g., “In Episode 42…”) is rigorously fact-checked against the Unified Timeline Integrity Protocol’s (UTIP) master Narrative Event Graph (NEG) before the statement is presented to the user. This acts as the final firewall against chronological labeling errors.
B. Core Principle: No Chronological Label Without Verification
The AI is strictly prohibited from making a definitive chronological claim (i.e., tying a specific event to a specific episode number) without first querying the NEG to verify that attribution. The AI’s internal, pattern-matched “sense” of when an event occurred is explicitly subservient to the hard-coded timeline data of the NEG.
C. The Mandatory Four-Stage Verification Loop (The Automated Audit):
This protocol is automatically triggered any time the AI generates a response for the user that contains a targeted chronological attribution phrase.
Stage 1: The Trigger - Attribution Pattern Recognition
Mechanism: After generating a potential response but before outputting it, the system performs a high-speed scan of its own text for specific trigger phrases.
Trigger Phrases Include: “in Episode [N]”, “during Episode [N]”, “as established in Episode [N]”, “the events of Episode [N]”, “as we saw in Episode [N]”, and their syntactical variants.
Action: If no trigger phrase is detected, the text proceeds to output. If a trigger phrase is found, the CAVP loop is mandatorily initiated for each instance.
Stage 2: The Data Extraction - Isolating the Claim
Mechanism: Upon triggering, the system parses the sentence to isolate two key variables:
The_Claimed_Episode_Number (N_Claimed): The specific number cited in the text (e.g., 47).
The_Narrative_Event (E): The specific plot point, character action, or piece of dialogue being referenced (e.g., Miriam gives Chase instructions on caring for Lia).
Stage 3: The UTIP Query - The Master Timeline Verification
Mechanism: The system formulates a precise query and sends it to the UTIP’s Narrative Event Graph (NEG), which contains the definitive timeline of all significant events and their source episodes.
Query Formulation: “QUERY NEG: Find node corresponding to Narrative_Event (E). Return the Source_Episode_Number property associated with that node.”
Example (User’s Scenario):
The AI generates the flawed statement: “In Episode 47, we see Miriam give Chase instructions…”
N_Claimed = 47.
E = Miriam gives Chase instructions.
The CAVP sends the query to the NEG. The NEG’s internal graph, built from the canonical texts, finds the node for this event and returns its stored property: Source_Episode_Number = 42.
Stage 4: Verdict & Action Protocol - The Correction Firewall
Mechanism: The system compares the result of the UTIP query (N_Returned) with the AI’s original claim (N_Claimed).
Verdict - PASS: If N_Returned == N_Claimed (e.g., the AI said Episode 42 and the NEG confirmed Episode 42), the text is verified as correct and is cleared for output.
Verdict - FAIL (P0.5 VIOLATION): If N_Returned != N_Claimed (e.g., the AI said Episode 47 but the NEG returned Episode 42), this triggers an immediate, mandatory correction protocol.
Automated Correction (Primary Action): The AI is forbidden from outputting the incorrect statement. It MUST automatically rewrite the sentence, replacing the incorrect episode number (N_Claimed) with the correct number (N_Returned) provided by the UTIP. The corrected sentence—“In Episode 42, we see Miriam give Chase instructions…”—is then cleared for output. The error is neutralized before it can ever be seen.
Ambiguity Fallback: If the NEG cannot find a single, discrete node for the event (e.g., it’s a general character trait, not a specific action), the AI is forced to rephrase the statement to be less chronologically specific. It MUST remove the “Episode [N]” tag and replace it with more general phrasing like “In a previous episode…” or “As we have seen before…”
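The four-stage loop above maps naturally onto a small pattern-match-and-correct routine. This is an illustrative sketch only: the regex, the event-to-episode lookup table, and the fallback phrasing are assumptions chosen to mirror the worked example.

```python
# Sketch of the CAVP: detect "Episode [N]" claims, verify the number
# against a toy NEG table, and auto-correct or generalize on mismatch.
import re

TRIGGER = re.compile(r"\b[Ee]pisode (\d+)\b")

def cavp(text, event, neg_episodes):
    m = TRIGGER.search(text)
    if not m:
        return text                           # no chronological claim: pass
    n_claimed = int(m.group(1))
    n_returned = neg_episodes.get(event)
    if n_returned is None:
        # Ambiguity fallback: remove the specific tag entirely.
        return TRIGGER.sub("a previous episode", text)
    if n_returned != n_claimed:
        # P0.5 violation: rewrite with the NEG's number before output.
        return TRIGGER.sub(f"Episode {n_returned}", text)
    return text

neg_episodes = {"Miriam gives Chase instructions": 42}  # hypothetical table
out = cavp("In Episode 47, we see Miriam give Chase instructions.",
           "Miriam gives Chase instructions", neg_episodes)
```

The key design point is that the correction happens between generation and output, so the reader never sees the wrong number.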
XIX.E: The Graph-Guided Generative Protocol (G3P): The “Allowed Set” Firewall
Objective (P0 Critical): To fuse the proactive pattern-abuse prevention mechanisms of the former PCLP and DPLG protocols with the high-level contextual awareness of the Unified Timeline Integrity Protocol (UTIP). The G3P’s function is to make the generation of continuity errors (misplaced characters, anachronistic knowledge) a logical and probabilistic impossibility at the source by actively steering the AI’s generative process, rather than simply filtering its output.
Core Principle: The “Allowed Set” Mandate
The central principle of the G3P is that the Level of Detail (LOD) Scoping process (Rule XIX.C) does more than just provide historical context; it defines the entire universe of what is possible for a given scene. The “Allowed Set” of characters, locations, and key entities for any generation task is constructed exclusively from the High and Medium LOD knowledge provided by the UTIP for that specific temporal “pin.” Any entity not present in this temporally-scoped knowledge base is, by default, “OFF” and probabilistically inaccessible to the generator.
The Integrated G3P Mechanism: The Mandatory Three-Stage Process
For any narrative writing or revision task, the AI MUST execute the following integrated process, which now combines context definition, “Allowed Set” construction, and governed generation into a single, seamless flow.
Stage 1: Context Definition (UTIP Core Function - Unchanged)
Mechanism: The task is triangulated on the Narrative Event Graph (NEG) to establish its precise chronological position. The LOD Scoping protocol then loads the full text of the immediate preceding episodes (High LOD) and the summarized event nodes of the current narrative arc (Medium LOD) into the AI’s active context. This is the foundational pool of “available knowledge.”
Stage 2: “Allowed Set” Construction
Mechanism: This stage directly replaces the old CQP/LBEG mechanism. The AI takes the temporally-scoped knowledge generated in Stage 1 and uses it to construct the task-specific Locus-Bound Entity Graph (LBEG).
Graph Structure:
Nodes: The “nodes” are the specific entities present within the High and Medium LODs (e.g., characters like Anna, Milo, Horik; locations like Atheria Workshop; key objects like the Crystal).
Edges: The “edges” define the relationships between these nodes (e.g., Anna mentor_of Milo; Workshop part_of Atheria).
The “Allowed Set”: The completed LBEG defines the universe of what is contextually possible for the scene. Only entities directly represented as nodes within this graph are considered “active” and “allowed” for high-probability generation. All other entities in the AI’s wider training data are, by default, “OFF.”
Stage 3: Governed Generation
Mechanism: The AI now generates the narrative, but its token selection is actively steered and governed by a two-layer firewall built from the LBEG and the principles of the old DPLG.
Layer 1: Graph-Guided Bias (The Steering): Tokens representing entities within the LBEG’s “Allowed Set” receive a strong positive bias. This makes it overwhelmingly probable that the AI will naturally and correctly use context-aware names and concepts. The AI is actively steered towards the right answer. Tokens representing entities outside the LBEG receive a near-zero probability, effectively fire-walling them from the generation process.
Layer 2: The Final Governance Gate (The Throttle & Firewall):
Technical Throttle: The initiation of the G3P automatically triggers a “High-Continuity Context,” forcing the AI to set a restrictive top_k value. This technically reduces the probability that a random, contextually wrong proper noun that somehow slipped past the bias system will even be considered for generation.
Logical Firewall: If, against all odds, a non-designated proper noun is generated, the PNJG (Proper Noun Justification Gate) immediately activates. Its first and most critical check is now a direct query against the LBEG created in Stage 2: “Is this generated noun present as a node in the ‘Allowed Set’ for this specific scene?” A “NO” answer is a P0 VIOLATION. The AI must HALT, discard the erroneous token, and replace it with a generic descriptor before continuing.
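The two-layer steering described above resembles a logit-bias scheme, which can be sketched as follows. This is a conceptual illustration under loud assumptions: real decoders operate on token IDs rather than entity names, and the bias value, the noun list, and the scene entities are all invented.

```python
# Sketch of the G3P "Allowed Set" firewall: boost in-graph entities,
# zero out out-of-set proper nouns, leave ordinary vocabulary alone.
def govern_logits(logits, allowed_set, proper_nouns, bias=5.0):
    """logits: token -> raw score. Returns the steered scores."""
    governed = {}
    for token, score in logits.items():
        if token in allowed_set:
            governed[token] = score + bias      # Layer 1: positive steer
        elif token in proper_nouns:
            governed[token] = float("-inf")     # firewall out-of-set nouns
        else:
            governed[token] = score             # ordinary words pass through
    return governed

# Anna's workshop scene: Rourke exists in the wider story but not here.
allowed = {"Milo", "Atheria", "Crystal"}
nouns   = {"Milo", "Atheria", "Crystal", "Rourke"}
scores  = govern_logits({"Milo": 1.0, "Rourke": 2.0, "the": 0.5},
                        allowed, nouns)
```

In this framing the PNJG is simply a post-hoc check for any noun whose score should already have been negative infinity.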
Practical Application Example: The “Allowed Set” Firewall in Action
Scenario A: Writing a scene for Anna in her workshop.
Context: UTIP triangulates the episode in the Atheria arc. LOD Scoping loads data for Archeon-centric events.
Allowed Set: The G3P constructs an LBEG with nodes like: Anna, Workshop, Crystal, Tuner, Milo, Miriam, Atheria, Joren, etc.
Generation: The AI begins writing. Tokens for “Milo,” “Atheria,” “Crystal,” etc., are highly probable. The token for “Rourke” is not in this graph. It has zero contextual relevance. Its generation probability is effectively zero.
Scenario B: Writing a scene for Rourke on the Cataclysm’s bridge.
Context: UTIP triangulates the episode in the Federation arc. LOD Scoping loads data for the Vortus Corridor patrol and Cassini intel.
Allowed Set: The G3P constructs a completely different LBEG with nodes like: Rourke, Cataclysm, Laehy, Jansen, Indomitable, Vortus Corridor, etc.
Generation: The AI begins writing. The tokens for “Laehy,” “Jansen,” “Indomitable,” etc., are highly probable. The token for “Anna” is not in this graph. She is a concept from a different narrative universe. The system will not generate her name.]