Show HN: I gave my robot physical memory – it stopped repeating mistakes
18 points
1 day ago
| 3 comments
| github.com
| HN
RovaAI
17 hours ago
[-]
The deduplication/state-memory pattern maps well to any long-running agent. What I've found works: instead of a complex memory system, a simple append-only log of processed items with a last_seen timestamp is often enough. Lookup is fast with a sorted structure, and you can prune entries older than your recurrence window.
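A minimal sketch of that pattern (the class name, dict-based storage, and the 7-day recurrence window are illustrative assumptions, not anything from the linked repo):

```python
import time

RECURRENCE_WINDOW = 7 * 24 * 3600  # illustrative: 7-day recurrence window, in seconds


class SeenLog:
    """Append-only-style log of processed items with a last_seen timestamp."""

    def __init__(self):
        self._last_seen = {}  # item_id -> last_seen timestamp

    def mark(self, item_id, now=None):
        """Record that item_id was just processed."""
        self._last_seen[item_id] = time.time() if now is None else now

    def seen(self, item_id, now=None):
        """True if item_id was processed within the recurrence window."""
        now = time.time() if now is None else now
        ts = self._last_seen.get(item_id)
        return ts is not None and (now - ts) < RECURRENCE_WINDOW

    def prune(self, now=None):
        """Drop entries older than the recurrence window."""
        now = time.time() if now is None else now
        self._last_seen = {
            k: v for k, v in self._last_seen.items()
            if (now - v) < RECURRENCE_WINDOW
        }
```

A plain dict stands in here for the sorted structure; either gives cheap lookups at this scale, and prune() keeps the log bounded.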

The hard part isn't storage — it's deciding what counts as "the same" item. For web research agents, URL identity isn't sufficient (pages change, same story, different URL). Content fingerprinting on normalized text (first N chars after stripping whitespace/HTML) turns out to be more reliable than URL equality.
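Something like this is what I mean by fingerprinting on normalized text (the regex-based tag stripping and the n=2048 cutoff are illustrative assumptions; real HTML normalization would want a proper parser):

```python
import hashlib
import re


def fingerprint(html_text, n=2048):
    """Hash the first n chars of normalized text, ignoring markup and whitespace."""
    text = re.sub(r"<[^>]+>", " ", html_text)         # strip HTML tags (crude)
    text = re.sub(r"\s+", " ", text).strip().lower()  # collapse whitespace, normalize case
    return hashlib.sha256(text[:n].encode("utf-8")).hexdigest()
```

Two copies of the same story served from different URLs (or with different surrounding markup) hash to the same fingerprint, which URL equality misses.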

Also worth noting: the failure mode you described (repeating mistakes) often comes from agents not distinguishing between "I haven't seen this" and "I saw this and it failed." Storing outcome alongside identity — even just success/failure — changes the behavior significantly. Retry logic becomes explicit instead of accidental.
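To make that concrete, here is one way to store outcome next to identity so retries become an explicit budget rather than an accident (the class, statuses, and MAX_ATTEMPTS value are all illustrative assumptions):

```python
class OutcomeLog:
    """Track per-item outcome ("success"/"failure") and attempt count."""

    MAX_ATTEMPTS = 3  # illustrative retry budget

    def __init__(self):
        self._log = {}  # item_id -> (status, attempts)

    def record(self, item_id, status):
        """Record the outcome of one attempt at item_id."""
        _, attempts = self._log.get(item_id, (None, 0))
        self._log[item_id] = (status, attempts + 1)

    def should_attempt(self, item_id):
        """Distinguish 'never seen' from 'seen and failed' from 'done'."""
        entry = self._log.get(item_id)
        if entry is None:
            return True                    # never seen: attempt it
        status, attempts = entry
        if status == "success":
            return False                   # done: don't repeat
        return attempts < self.MAX_ATTEMPTS  # failed: retry within budget
```

With identity alone the agent can only answer "new or not"; with outcome attached, "failed before, retry twice more, then give up" is a one-line policy.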

reply
sankalpnarula
11 hours ago
[-]
Hey, just curious: what happens when the memory gets large enough? Does it start creating problems with context windows?
reply
DANmode
1 day ago
[-]
I'd recommend providing a text summary of the comparison chart, and talking a bit about the API.
reply
DANmode
1 day ago
[-]
I’m going to say this has failed the Turing test based on the reply.
reply