A set of speculative ideas on treating LLM agents as interpreters with limited working memory, using externalized structures for reliable long-term project maintenance.

Hi everyone,

I wanted to share a set of ideas I’ve been working on—not as a finished project, but as a direction for thinking about LLM agents differently.

The core view is this: an LLM-powered agent should be treated as an interpreter with a tiny working memory (the context window), not just a prompt-driven chatbot. Once we accept that the agent is fundamentally forgetful, the real design challenge becomes: What external structures do we need to make long-lived, reliable work possible?

The repository linked below is a collection of design notes and thought experiments exploring that question. It covers:

  • The Forgetful Society – a thought experiment on how progress is possible under extreme memory limits, and what that implies for agent cooperation.
  • Issue Tree – an addressable tree structure (like /0/1/0) for conversation history, with epistemic node types derived from Shannon’s communication model.
  • Square Root Boundary – a heuristic for detecting when context compression has become necessarily lossy, or when a conversation is structurally redundant.
  • Lazy Tool Evaluation – using placeholders (@lazy{...}) to keep tool output out of the context window until needed, inspired by Unix pipes.
  • Script as Native Extension – replacing ad‑hoc shell commands with a library of versioned, idempotent scripts that grow over time.
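To make the Issue Tree idea a bit more concrete, here is a minimal sketch of /0/1/0-style addressing over a conversation tree. Everything in it is a hypothetical illustration of the addressing scheme described above, not the repo's actual design: the `Node` class, its `kind` labels, and the `resolve` helper are all my own stand-ins.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str                  # hypothetical epistemic label, e.g. "question" / "answer"
    text: str
    children: list["Node"] = field(default_factory=list)

def resolve(root: Node, path: str) -> Node:
    """Walk a /0/1/0-style address: each path segment is a child index."""
    node = root
    parts = path.strip("/")
    for part in parts.split("/") if parts else []:
        node = node.children[int(part)]
    return node

# A tiny conversation tree; "/" addresses the root, "/0/0/0" its great-grandchild.
root = Node("question", "How should we store history?", [
    Node("answer", "As a tree.", [
        Node("question", "Addressed how?", [
            Node("answer", "By index path, like /0/0/0."),
        ]),
    ]),
])

assert resolve(root, "/").kind == "question"
assert resolve(root, "/0/0/0").text == "By index path, like /0/0/0."
```

The appeal of index paths over node IDs is that the address itself encodes the node's position in the dialogue, so an agent can refer back to a subtree ("everything under /0/1") without holding its contents in the context window.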

None of this is validated by experiments; it’s purely speculative design work. The repo is just a set of Markdown files—more like a public notebook than a library.

I’m posting this here because the LangChain community has been thinking deeply about agent architecture, memory, and tool use. I’d be very interested to hear where these ideas overlap with what others are building, or where they completely miss the mark.

Thanks for reading.

Repository: D7x7z49/llm-context-idea on GitHub. Research notes on maintaining and reasoning about LLM context windows across long-lived projects: speculative ideas, heuristic boundaries, and design sketches without empirical validation.