Show HN: Continual Learning with .md
21 points | 6 hours ago | 4 comments | github.com
I have a proposal that cheaply addresses long-term memory for LLMs when new data arrives continuously. The system involves no code, just two Markdown files.

For retrieval, the memory is laid out as a semantic filesystem that LLMs can easily search with ordinary shell commands.
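As an illustration of what shell-style retrieval over markdown memory files might look like (this is my own minimal sketch, not the author's implementation — the `memory/` directory layout and the term-counting heuristic are assumptions):

```python
from pathlib import Path

def search_memory(query_terms, memory_dir="memory"):
    """Naive grep-style retrieval: rank markdown files by how many
    query terms they contain. A stand-in for what an LLM might do
    with shell commands over a semantic filesystem."""
    results = []
    for path in Path(memory_dir).rglob("*.md"):
        text = path.read_text(encoding="utf-8").lower()
        score = sum(text.count(term.lower()) for term in query_terms)
        if score > 0:
            results.append((score, str(path)))
    # Highest-scoring files first
    return [p for _, p in sorted(results, reverse=True)]
```

The appeal of this kind of retrieval is that it needs no index or embedding step: the "database" is just files the model (or a human) can read and edit directly.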

It is currently a scrappy v1, but it works better than anything I have tried.

Curious to hear any feedback!

alexbike
3 hours ago
The markdown approach has a real advantage people underestimate: you can read and edit the memory yourself. With vector DBs and embeddings the memory becomes opaque — you can't inspect or correct what the model "knows". Plain files keep the human in the loop.

The hard part is usually knowing what *not* to write down. Every system I've seen eventually drowns in low-signal entries.

reply
in-silico
3 hours ago
This assumes that the model's behavior and memories are faithful to their English/human-language representation and don't stray into (even subtle) "neuralese".
reply
verdverm
3 hours ago
Is there anything (besides plumbing) that prevents both? i.e. when the file is edited, all the representations are updated
reply
dhruv3006
1 hour ago
I love how you approached this with markdown!

I guess the markdown approach really has an advantage over others.

PS: Something I built on markdown: https://voiden.md/

reply
namanyayg
4 hours ago
I've seen a lot of such systems come and go. One of my friends is working on probably the best (VC-funded) memory system right now.

The problem always is that when there are too many memories, the context gets overloaded and the AI starts ignoring the system prompt.

Definitely not a solved problem, and there need to be benchmarks to evaluate these solutions — though benchmarks themselves can be easily gamed and aren't universally applicable.

reply
natpalmer1776
1 hour ago
The armchair ML engineer in me says our current context management approach is the issue. With a proper memory management system wired up to its own LLM-driven orchestrator, memories should be pulled in and pushed out between prompts, and ideally, in the middle of a "thinking" cycle. You can enhance this to be performant using vector databases and such, but the core principle remains the same and is oft repeated by parents across the world: "Clean up your toys before you pull a new one out!"

Also, since I thought for another 30 seconds: the "too many memories!" problem imo is the same problem as context management and compaction, and requires the same approach: more AI telling AI what AI should be thinking about. De-rank "memories" in the context manager as irrelevant and don't pass them to the outer context. If a memory is de-ranked often and not used enough, it gets purged.
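The de-rank-and-purge policy described above could be sketched roughly like this (my own toy illustration — the `MemoryStore` class, the counters, and the purge threshold are all assumptions; `relevant` stands in for an LLM relevance judgment):

```python
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    deranked: int = 0   # times the orchestrator judged it irrelevant
    used: int = 0       # times it was actually passed into the context

class MemoryStore:
    """Toy version of the policy: de-rank memories the orchestrator
    skips, and purge ones skipped repeatedly without ever being used."""
    def __init__(self, purge_after=3):
        self.items: list[Memory] = []
        self.purge_after = purge_after

    def add(self, text):
        self.items.append(Memory(text))

    def select(self, relevant):
        chosen = []
        for m in self.items:
            if relevant(m.text):
                m.used += 1
                chosen.append(m.text)
            else:
                m.deranked += 1
        # Purge: de-ranked often and never used
        self.items = [m for m in self.items
                      if not (m.deranked >= self.purge_after and m.used == 0)]
        return chosen
```

In a real system the relevance call and the purge heuristic would be the hard parts; the point here is just that the bookkeeping itself is simple.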

reply
dummydummy1234
1 hour ago
Mid thinking cycle seems dangerous, as it will probably kill caching.
reply
natpalmer1776
1 hour ago
The mid thinking cycle would require a significant architecture change to the current state of the art, and imo is a key blocker to AGI.
reply
xwowsersx
1 hour ago
What is the memory system you are referring to? I've been trying Memori with OpenClaw. Haven't had a ton of time to really kick the tires on it, so the jury's still out.
reply
sudb
5 hours ago
I really like the simplicity of this! What's retrieval performance and speed like?
reply