Everything the author says about memory management tracks with my intuition of how CC works, including my perception that it isn't very good at explicitly managing its own memory.
My next step in trying to get it to work well on a bigger game would be to try to build a more "intuitive" memory tool, where the textual description of a room or an item would automatically RAG previous interactions with that entity into context.
That also is closer to how human memory works -- we're instantly reminded of things via a glimpse, a sound, a smell... we don't need to (analogously) write in or search our notebook for basic info we already know about the world.
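Roughly what I have in mind (just a sketch; the entity store and the keyword matching are made up for illustration, and a real version would presumably use embeddings rather than substring lookup):

```python
# Sketch of an "intuitive" memory tool: whenever the game's output mentions an
# entity we already have notes on, those notes are pulled into context
# automatically, with no explicit tool call by the model. Matching is plain
# substring lookup purely to keep the example self-contained.
from collections import defaultdict

class EntityMemory:
    def __init__(self):
        self.notes = defaultdict(list)  # entity name -> past observations

    def record(self, entity: str, note: str) -> None:
        self.notes[entity.lower()].append(note)

    def recall(self, game_text: str, limit: int = 3) -> list[str]:
        """Return recent notes for every known entity mentioned in the new game text."""
        text = game_text.lower()
        recalled = []
        for entity, notes in self.notes.items():
            if entity in text:
                recalled.extend(notes[-limit:])  # up to the last few notes per entity
        return recalled

memory = EntityMemory()
memory.record("real estate office", "Agent was out; the front door is locked.")
room = "You are standing outside the real estate office, rain drumming on the awning."
context_snippets = memory.recall(room)  # prepended to the model's next turn
```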
Context is intuitively important, but people rarely put themselves in the LLM's shoes.
What would be eye-opening would be to create an LLM test system that periodically sends a turn to a human instead of the model. Would you do better than the LLM? What tools would you call at that moment, given only that context and no other knowledge? The way many of these systems are constructed, I'd wager it would be difficult for a human.
The agent can't decide what is safe to delete from memory because it's a sort of bystander at that moment. Someone else made the list it received, and someone else will get the list it writes. The reasoning behind why the notes exist is lost. LLMs are living out the Christopher Nolan film Memento.
Ah, Anchorhead! One of the most celebrated pieces of interactive fiction ever written.
Edit: they are there in the repo: https://github.com/eudoxia0/claude-plays-anchorhead/tree/mas...
This behavior surprised me when I started using LLMs, since it's so counterintuitive.
Why does every interaction require submitting and processing all data in the current session up until that point? Surely there must be a way for the context to be stored server-side, and referenced and augmented by each subsequent interaction. Could this data be compressed in a way that keeps the most important bits and garbage-collects everything else? Could there be different compression techniques depending on the type of conversation, similar to the domain-specific memories and episodic memory mentioned in the article? Could "snapshots" be supported, so that the user can explore branching paths in the session history? Some of this is possible by manually managing context (sketched below), but it's too cumbersome.
Why are all these relatively simple engineering problems still unsolved?
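For the snapshot case, by "manually managing context" I mean something like this, where the conversation is just a client-side list of messages and a branch is a copy of that list (illustrative only, no real API involved):

```python
# Branching the session history by hand: save a copy of the message list,
# experiment, then restore it to explore a different path.
import copy

history = [
    {"role": "user", "content": "> north"},
    {"role": "assistant", "content": "You enter the library."},
]

snapshot = copy.deepcopy(history)  # save point before trying something risky
history.append({"role": "user", "content": "> open the locked cabinet"})
# ...the branch turns out badly...
history = snapshot                 # rewind and explore a different path
```

It works, but you're still re-sending the whole restored prefix on every turn, which is exactly the cumbersome part.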
- https://platform.openai.com/docs/guides/prompt-caching
- https://platform.claude.com/docs/en/build-with-claude/prompt...
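For what it's worth, the Anthropic variant looks roughly like this (a sketch using the anthropic Python SDK; the model id and prompts are placeholders): you mark the stable prefix with cache_control and the server reuses it on subsequent requests that share it.

```python
# Prompt caching sketch: the system prompt (and any long stable prefix) is
# marked cacheable, so later requests that share the prefix hit the cache.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder model id
    max_tokens=1024,
    system=[
        {
            "type": "text",
            "text": "You are playing the interactive fiction game Anchorhead...",
            "cache_control": {"type": "ephemeral"},  # cache this stable prefix
        }
    ],
    messages=[{"role": "user", "content": "> look"}],
)

# usage reports cache_creation_input_tokens / cache_read_input_tokens; cached
# tokens are billed at a reduced rate, but the prefix still has to be sent and
# still counts toward the context window.
print(response.usage)
```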
But then why is there compounding token usage in the article's trivial solution? Is it just a matter of using the cache correctly?