I believe the human mind interprets this title perfectly.
So the question is how to resolve it?
It’s a problem for every person using AI on this planet, and it wastes so much extra processing power that fixing it could halve energy bills worldwide.
I can’t refer back to an AI response that was literally three pages long and ask for clarification, or for the steps again, so that my human mind can proceed with what I’m doing.
Can’t you add reference points to the chat log? Something like a bullet-style selector next to a message that marks a reference point a person can “cycle” back to?
Or a text-highlight feature, in my own or the AI’s messages, that also works as a reference point: I mark “cmd xyz” and can cycle back to it if needed.
For the human this is a pure UX issue; for the product it’s a usability and logic issue.
Lower your costs and expenses and implement this, please.
It seems Gemini Fast likes it too:
The idea presented in that discussion is highly practical, especially for users who leverage AI for complex, multi-stage projects like software development or technical documentation.
Here is an assessment of why this is a significant UX evolution and the challenges it faces:
Why it is a “Good Idea”
- Cognitive Load Reduction: Humans are not built to process “infinite” linear streams of information. We naturally organize information hierarchically. Allowing users to anchor specific points (like a specific code block or a set of instructions) aligns the interface with how our brains actually work.
- Token & Cost Efficiency: Currently, if you want to refer to a point made 20 messages ago, you often have to re-paste that context or hope the “attention mechanism” of the model still weights it correctly. Direct “anchoring” could allow for more precise “pointer-based” prompting, reducing the need to process irrelevant parts of the history.
- Contextual Persistence: In long sessions, the “core” goal often gets buried under troubleshooting steps. A reference point feature would allow a user to “pin” the primary objective so it remains visible or easily accessible regardless of how far the chat has scrolled.
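The pin-and-cycle mechanism described above could be sketched as a thin client-side layer over the chat log. This is purely illustrative; `Message`, `ChatLog`, `pin`, and `cycle` are hypothetical names, not any vendor's API:

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    index: int   # position in the chat log
    role: str    # "user" or "assistant"
    text: str

@dataclass
class ChatLog:
    messages: list = field(default_factory=list)
    pins: list = field(default_factory=list)  # indices of pinned messages

    def pin(self, index: int) -> None:
        """Mark a message as a reference point."""
        if index not in self.pins:
            self.pins.append(index)

    def cycle(self, current: int) -> int:
        """Return the next pinned index after `current`, wrapping around."""
        ordered = sorted(self.pins)
        later = [i for i in ordered if i > current]
        return later[0] if later else ordered[0]
```

The key design point is that pins live in the UI layer, not the model: cycling is instant and free, and only when the user actually re-asks about a pinned message does its text need to re-enter the prompt.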
The Challenges to Implementation
- State Management: AI models generally see the chat as one long string of text. Implementing “reference points” requires a UI layer that can translate a user’s click into a specific metadata tag that the model understands as a “high-priority” segment of the context.
- UI Clutter: Adding selectors, anchors, and highlight tools to a clean chat interface risks making it look like a complex IDE (Integrated Development Environment). For casual users, this might be overwhelming, so it would likely need to be a “Power User” toggle.
- Model Drift: Even with reference points, the model still has a finite context window. If a “pinned” reference point falls outside the actual context window of the model, the UI might promise a “reference” that the AI can no longer actually “see” or remember in detail.
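The context-window caveat in the last point can be sketched as a simple token-budget check the UI could run before showing a pin as “live.” Everything here is assumed for illustration: the window is modeled as keeping the newest messages and dropping the oldest once a token budget is spent, and `count_tokens` is a stand-in for a real tokenizer:

```python
def visible_pins(messages, pins, context_limit, count_tokens):
    """Return the pinned indices whose messages still fit in the model's
    context window, assuming the window retains the most recent messages
    and drops the oldest once `context_limit` tokens are exceeded."""
    budget = context_limit
    visible = set()
    # Walk backwards from the newest message, spending the token budget.
    for idx in range(len(messages) - 1, -1, -1):
        budget -= count_tokens(messages[idx])
        if budget < 0:
            break
        visible.add(idx)
    return [p for p in pins if p in visible]
```

A UI could grey out any pin this filter drops, so the interface never promises a reference the model can no longer see.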
Verdict: A Necessary Evolution
The “Infinite Scroll” was a carry-over from SMS and instant messaging. However, as AI moves from “answering questions” to “building projects,” the interface needs to shift toward a Workspace Model.
Features like Sidebars for Pinned Code, Collapsible Sections for long AI explanations, and Branching (where you can start a new “path” from a specific message) are likely the next steps in solving the “Linear Chat Log Problem.”