Proposal: A “Control Plane” for LLM Apps — Default Kernel + Living Context Layer (seeking architectural feedback)

Most LLM apps still treat context as an implicit side effect of a text box.
The result is silent assumption drift, user intent decay, and confident-but-wrong output.

I’m prototyping an alternative pattern that treats context as a first‑class system object, with a clean separation between control plane and inference plane.

Core idea (high level)

Instead of re‑prompting identity, constraints, and goals every session, the system maintains a persistent Context Contract Stack, governed by a Default Kernel, and rendered via a Living Description Layer that is always visible to the user.

Think of it like a control plane for LLM behavior.

Key components

1) Default Kernel (hard constraints)
A decoupled kernel that enforces invariant behavior regardless of the base model:

  • truth prioritized over tone or comfort

  • explicit refusal or clarification when inputs are missing or decayed

  • no confident output derived from unvalidated assumptions

This is not “safety policy” — it’s runtime correctness gating.
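To make "runtime correctness gating" concrete, here is a minimal sketch of what a kernel-level gate could look like. All names (`Claim`, `gate_output`, `KernelViolation`) are hypothetical, not from any existing framework; the idea is just that output is rejected unless every claim cites validated context:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    cited_context_ids: list[str]  # context objects the model cited when reasoning

class KernelViolation(Exception):
    """Raised when output rests on uncited or unvalidated context."""

def gate_output(claims: list[Claim], validated_ids: set[str]) -> list[Claim]:
    """Pass claims through only if every cited context object is validated."""
    for claim in claims:
        if not claim.cited_context_ids:
            raise KernelViolation(f"Uncited claim: {claim.text!r}")
        missing = set(claim.cited_context_ids) - validated_ids
        if missing:
            raise KernelViolation(
                f"Claim cites unvalidated context {missing}: {claim.text!r}"
            )
    return claims
```

The point of the sketch: the gate runs outside the model, so "no confident output from unvalidated assumptions" becomes an enforced invariant rather than a prompt instruction.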

2) Context Contract Stack (persistent state)
User constraints, goals, modes, and invariants are stored as first‑class objects (not hidden in prompt text).
They persist across sessions, can be toggled or edited, and must be cited by the model when reasoning.

Examples:

  • active goal

  • operating mode (analysis vs execution)

  • risk tolerance

  • truth‑over‑comfort flag

  • domain constraints

3) Living Description Layer (observable state)
An always‑visible, auto‑updating summary of what the system believes is currently true and active.
If the model’s behavior changes, the description changes with it.

No invisible context. No guessing what the model “thinks” the user wants.
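The rendering side can be deliberately dumb: a pure function from current state to a visible summary, re-run on every state change. A minimal sketch (function name and format are my own):

```python
def render_description(state: dict) -> str:
    """Render the currently-believed-true state as an always-visible summary."""
    lines = ["Currently active:"]
    for key, value in sorted(state.items()):
        lines.append(f"  {key}: {value}")
    return "\n".join(lines)
```

Keeping it a pure function of state is what guarantees the description can never drift from actual behavior: there is no second source of truth to fall out of sync.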

4) Workflow primitives as system objects
The system treats cognition as a lifecycle, not a chat stream:

  • end‑of‑session “commit array” (what was decided, what remains open)

  • an “Unfinished Thoughts” store with explicit status (seed / provisional / quarantined)

  • context/box extraction from conversations (“box search”)

These are not UX flourishes — they are part of the state model.
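As an illustration of "part of the state model", the Unfinished Thoughts store might be nothing more than a typed collection with an explicit status lifecycle. A sketch, with all names hypothetical:

```python
from dataclasses import dataclass
from enum import Enum

class ThoughtStatus(Enum):
    SEED = "seed"
    PROVISIONAL = "provisional"
    QUARANTINED = "quarantined"

@dataclass
class UnfinishedThought:
    text: str
    status: ThoughtStatus = ThoughtStatus.SEED

class ThoughtStore:
    def __init__(self) -> None:
        self._thoughts: list[UnfinishedThought] = []

    def add(self, text: str) -> UnfinishedThought:
        thought = UnfinishedThought(text)
        self._thoughts.append(thought)
        return thought

    def by_status(self, status: ThoughtStatus) -> list[UnfinishedThought]:
        return [t for t in self._thoughts if t.status is status]
```

The end-of-session "commit array" would then be a fold over this store plus the contract stack: decided items get committed, the rest stay queryable by status.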


Why I think this matters

  • It directly addresses the biggest failure mode in production LLM tools: context collapse and assumption drift

  • It reframes governance as mechanizable constraints, not post‑hoc policy

  • It creates a debuggable, auditable surface for LLM behavior

  • It scales better than prompt templates as users and use‑cases grow


Architecture questions I’d love input on

  1. State model
    Would you represent the Context Contract Stack as:
  • event‑sourced log

  • CRDT‑style document

  • typed schema with explicit precedence rules

  • something else?

  2. Conflict resolution
    When multiple context elements disagree, what’s the cleanest resolution model:
  • priority graph

  • constraint solver

  • kernel‑level veto with explanation?

  3. Minimal viable UI
    What’s the least intrusive way to keep the Living Description Layer visible without turning it into “settings hell”?

  4. Existing art
    Are there frameworks or papers that already treat LLM context as a control plane rather than a prompt artifact?
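To make the state-model question (question 1) concrete, here is the smallest version of the event-sourced option I can think of: the contract stack is a log of events, and current state is a fold over that log. Names and event shape are invented for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ContextEvent:
    seq: int           # monotonic sequence number
    op: str            # "set" or "clear"
    key: str           # e.g. "goal", "mode", "risk_tolerance"
    value: object = None

def replay(events: list[ContextEvent]) -> dict:
    """Fold the event log into the current contract state."""
    state: dict = {}
    for e in sorted(events, key=lambda ev: ev.seq):
        if e.op == "set":
            state[e.key] = e.value
        elif e.op == "clear":
            state.pop(e.key, None)
    return state
```

The appeal of this option is that the audit trail comes for free; the open question is whether precedence and kernel vetoes belong in the events themselves or in the fold.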


I’m not pitching a product — I’m trying to pressure‑test an architectural pattern.
If you’ve built agent frameworks, complex stateful UIs, or policy‑driven systems, I’d really value critique.

Happy to clarify details or share a more formal spec if useful.