Scanning the space, I keep seeing devs (us included, initially) obsess over complex "memory" layers and graph-based reflection. In practice, we found they mostly lead to context poisoning and high latency.
We pivoted to a "Barbell Strategy": crisp, lean inter-agent instructions on one end, massive, localized "artifact" context on the other, handed to sub-agents that are killed immediately after the task.
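To make that concrete, here's a minimal Python sketch of the shape, assuming a generic `llm` callable that you'd back with your own provider client. The names (`run_ephemeral_subagent`, `TaskResult`) and the message format are illustrative, not our actual stack:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical LLM call: takes a list of {"role", "content"} messages
# and returns the model's text. Swap in your provider's client here.
LLMCall = Callable[[list[dict]], str]


@dataclass
class TaskResult:
    summary: str  # the only thing that survives the sub-agent


def run_ephemeral_subagent(
    llm: LLMCall,
    instruction: str,  # crisp and lean: what the orchestrator wants back
    artifact: str,     # massive, localized context (file dump, logs, etc.)
) -> TaskResult:
    """Spawn a sub-agent with a throwaway context window.

    The full artifact lives only inside this call; the orchestrator
    never sees it, so it can't poison the long-lived context.
    """
    messages = [
        {"role": "system", "content": instruction},
        {"role": "user", "content": artifact},
    ]
    answer = llm(messages)
    # The messages list (and the artifact inside it) is discarded on
    # return: the "agent" dies with the stack frame.
    return TaskResult(summary=answer)


# Orchestrator side: only the lean instruction and the small result
# ever touch the long-lived conversation.
# result = run_ephemeral_subagent(
#     llm=my_provider_call,
#     instruction="Summarize the failing tests and name the likely root cause.",
#     artifact=open("ci_log.txt").read(),
# )
```

The point of the design is that nothing about the sub-agent persists: no memory store, no reflection graph, just a big one-shot context that gets thrown away.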
For those of you building agents in production, I'm curious:
Have you found a way to make "long-term memory" actually reliable, or are you also moving toward ephemeral, specialized agents?
What's the "boring" plumbing problem (auth, state rollback, etc.) that took you way longer to solve than the actual AI logic?