A three-layer memory architecture for long-running agents
Anthropic's recent piece on effective harnesses for long-running agents hit close to home. We've been wrestling with the same problems — agents that try to one-shot everything, declare victory prematurely, and leave chaos for the next session to clean up. But we solved some of these problems differently. Here's what's working for us, what isn't yet, and where we respectfully disagree with the proposed solutions.

The Memory Problem: Three Layers Beat One

Anthropic's solution is a progress.txt file plus git history. It works, but it's flat. We use three layers instead:

Layer 1: Model actualization. A semantic memory system that helps the orchestrating agent understand what we are building and why. This is the soft layer.

Layer 2: Think Jira meets Git, but for AI agents. Structured storage of tasks with metadata: blockers, decision paths, dependencies, progress state. The agent doesn't just know what to do next; it understands the logic of how we got here and where we're going.

Layer 3: Git. The non-rotting charm of classic version control.

The key insight: separating "understanding" from "tracking" from "versioning" reduces cognitive load on the agent.
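To make the split concrete, here's a minimal sketch of Layers 1 and 2 in Python. Everything here is hypothetical naming (TaskRecord, AgentMemory, next_actionable are ours for illustration, not a real library); Layer 3 is git itself, so the sketch only records the base commit ref.

```python
from dataclasses import dataclass, field

@dataclass
class TaskRecord:
    """Layer 2 entry: structured task state ("Jira meets Git")."""
    task_id: str
    description: str
    status: str = "pending"  # pending | in_progress | blocked | done
    blockers: list[str] = field(default_factory=list)
    decisions: list[str] = field(default_factory=list)  # why we chose this path
    depends_on: list[str] = field(default_factory=list)

class AgentMemory:
    """Hypothetical three-layer store the orchestrator loads at session start."""
    def __init__(self) -> None:
        self.intent: str = ""  # Layer 1: what we're building and why (soft layer)
        self.tasks: dict[str, TaskRecord] = {}  # Layer 2: structured tracking
        self.base_commit: str | None = None  # Layer 3 is git; we only keep the ref

    def next_actionable(self) -> list[TaskRecord]:
        """Pending tasks with no blockers whose dependencies are all done."""
        done = {tid for tid, rec in self.tasks.items() if rec.status == "done"}
        return [rec for rec in self.tasks.values()
                if rec.status == "pending"
                and not rec.blockers
                and set(rec.depends_on) <= done]
```

The point of the shape, not the code: "understanding" lives in one field, "tracking" in another, and "versioning" stays where it already works.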

On Premature Victory: Prompt Engineering > Programmatic Constraints

Anthropic's approach to the "agent declares victory too early" problem is a JSON file with passes: true/false flags and strongly-worded instructions not to edit it. Our approach: make the supervising agent responsible for proper task breakdown into what we call atomic structures, concrete enough to be unambiguous, but not so detailed that they micromanage the implementation.

Completion criteria live in the sub-agent's prompt, not in the task definition. The sub-agent knows: tests must pass, lint must be clean, migrations applied. The supervising agent doesn't repeat this for every task; it's baked into how the sub-agent operates.

Yes, this requires better prompts. But it also produces more robust behavior. The agent develops something closer to judgment rather than just following rules it's been told not to break.
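A sketch of what "criteria in the prompt, not the task" means in practice. The names (CODER_SYSTEM_PROMPT, make_task_prompt) and the specific criteria wording are ours for illustration, assuming the supervisor passes only the atomic task text:

```python
# Completion criteria are baked into the sub-agent's standing system prompt,
# not repeated in every task definition the supervisor hands over.
CODER_SYSTEM_PROMPT = """\
You are a coding sub-agent. A task is complete only when ALL of these hold:
- the test suite passes
- lint is clean
- pending migrations are applied
Verify each criterion yourself before reporting completion.
"""

def make_task_prompt(atomic_task: str) -> str:
    """The supervisor sends only the atomic task; no per-task
    completion flags, no JSON file the model is told not to edit."""
    return f"Task: {atomic_task}\nWork until your completion criteria hold."
```

The contrast with the flags approach: there is nothing here for the agent to edit its way around, because "done" is defined by checks it must run, not a file it must not touch.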

The Multi-Agent Question: Minimum Viable Agents

Anthropic asks whether a single general-purpose agent or a multi-agent architecture works better. Our answer: use as few agents as possible. Every handoff between agents is a potential break in reasoning continuity. For small projects or clean microservice architectures: two agents, a strategic orchestrator and a coding agent. For complex systems: add a code reviewer. Three agents maximum.
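The topology rule fits in a few lines. A minimal sketch, with placeholder lambdas standing in for real model calls (Agent and build_team are hypothetical names):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    run: Callable[[str], str]  # placeholder for a real model invocation

def build_team(complex_system: bool = False) -> list[Agent]:
    """Two agents by default; a reviewer only when the system warrants it.
    Never more than three, since each handoff risks breaking reasoning continuity."""
    team = [
        Agent("orchestrator", lambda msg: f"plan: {msg}"),
        Agent("coder", lambda msg: f"diff for: {msg}"),
    ]
    if complex_system:
        team.append(Agent("reviewer", lambda msg: f"review of: {msg}"))
    return team
```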

What Anthropic Missed: Human-in-the-Loop as Synchronization

The Anthropic piece treats human involvement as bookends: you provide the prompt, you review the result. We take a different approach: the user can intervene at any point, and the sub-agent's completion report doesn't reach the orchestrator until the user validates it. This started as a bug. It became our favorite feature. Why it matters: we don't limit autonomy; we synchronize understanding between human and AI at critical checkpoints.

What We Haven't Solved Yet

Honesty time: end-to-end testing is still manual for us. Lint passes, unit tests run, but visual verification happens in a separate session with a human watching. Anthropic's Puppeteer integration for browser testing is genuinely useful. We haven't automated that layer yet. It's on the roadmap, right after "pay rent" and "sleep occasionally."

The Takeaway

Long-running agents are hard. Anthropic's solutions work. Ours work differently. The philosophical difference: they lean toward programmatic constraints (JSON files, explicit flags, structured formats the model "can't" edit). We lean toward better task decomposition and human checkpoints. Neither approach is wrong; different contexts, different constraints. We're sharing what worked in ours. If you're building something similar, maybe it saves you a few iterations.
