In short, agents submit short memories through an MCP server whenever they hit something the current context didn't cover. A scheduled "dream" then consolidates each batch into long-term memory and updates AGENTS.md and the skills from it. The resulting skills can be pushed to git and fed back to the agents, closing a self-evolution loop.
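To make the loop concrete, here is a minimal Python sketch of the submit-then-dream cycle. All names here (`ShortTermMemory`, `dream`, the naive append-based consolidation) are hypothetical illustrations, not Dreamer's actual API — the real system consolidates with a model rather than string concatenation:

```python
from dataclasses import dataclass, field


@dataclass
class ShortTermMemory:
    """Buffer where agents drop notes the current context didn't cover.

    Hypothetical sketch; not the actual Dreamer class.
    """
    entries: list[str] = field(default_factory=list)

    def submit(self, note: str) -> None:
        self.entries.append(note)

    def drain(self) -> list[str]:
        """Return the pending batch and reset the buffer."""
        batch, self.entries = self.entries, []
        return batch


def dream(batch: list[str], agents_md: str) -> str:
    """Consolidate a batch into the long-term context.

    Here the notes are naively appended as bullets; the real "dream"
    step would use a model to summarize, dedupe, and distill skills.
    """
    if not batch:
        return agents_md
    lessons = "\n".join(f"- {note}" for note in batch)
    return agents_md + "\n## Learned during last dream\n" + lessons + "\n"


stm = ShortTermMemory()
stm.submit("The test suite needs POSTGRES_URL set before running.")
stm.submit("Prefer `make lint` over calling ruff directly.")
agents_md = dream(stm.drain(), "# AGENTS.md\n")
print(agents_md)
```

The updated AGENTS.md would then be committed and picked up by agents on their next session, which is what closes the loop.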
Main features:
- Works with any coding agent that supports MCP and skills.
- Works across the team: submissions from everyone's sessions are pooled and aggregated into a team-level context.
- Output is plain AGENTS.md plus skills, so it is easy to version, review, and edit.
- Extensible by design: every component (short-term memory, long-term memory, dreamer model, hooks, etc.) is a Python Protocol wired from a runtime config, so virtually any part can be swapped or extended.
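The Protocol-based extensibility in the last bullet looks roughly like this. The `LongTermMemory` protocol and its method signatures are invented for illustration; Dreamer's actual interfaces may differ:

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class LongTermMemory(Protocol):
    """Hypothetical component interface, not Dreamer's actual one."""

    def store(self, lesson: str) -> None: ...
    def recall(self, query: str) -> list[str]: ...


class InMemoryStore:
    """Trivial drop-in implementation; satisfies the protocol
    structurally, with no inheritance required."""

    def __init__(self) -> None:
        self._lessons: list[str] = []

    def store(self, lesson: str) -> None:
        self._lessons.append(lesson)

    def recall(self, query: str) -> list[str]:
        return [l for l in self._lessons if query.lower() in l.lower()]


# A runtime config would name the implementation class; anything with
# matching methods type-checks against the Protocol.
memory: LongTermMemory = InMemoryStore()
memory.store("Use uv, not pip, in this repo.")
print(memory.recall("uv"))
```

Because Protocols use structural typing, swapping in a vector DB or a file-backed store is just a matter of pointing the config at a class with the same method shapes.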
Looking for your feedback!

GitHub repo: https://github.com/luml-ai/dreamer
Blogpost: https://luml.ai/blog/2026/dreamer-self-evolving-agents