ODAM Memory for Cursor – Long-Term Project Memory for Your AI Coding Assistant
AndrewMPT
Have you ever had this feeling: you already explained something to your AI assistant… but it still makes the same mistake again. And again.

Or agents that rely on RAG, where the knowledge base was uploaded once and never really keeps up with how your code and product evolve? No dynamic updates, no memory of what actually worked, what broke production, what was refactored.

That’s exactly what we’re fixing with our ODAM-powered long-term memory for AI assistants.

Instead of a static snapshot, it builds a living, human-like memory layer over your work:
• remembers what you’ve already tried and which patterns actually worked
• tracks code changes and decisions over time, not just files in isolation
• keeps context fresh, even when requirements, APIs, and architectures change
• reduces “hallucinated confidence” by grounding answers in your real history

Early results from our internal usage:
• ~80% fewer errors and misunderstandings of user intent
• ~30% faster task completion
• up to 60% fewer tokens consumed

For me, seeing these numbers in a real workflow is not just “nice metrics” — it’s a confirmation that AI can really learn from you over time, not just respond to a single prompt.

Most AI coding assistants still “forget” your project between prompts. That makes them feel magical in demos and frustrating in real work.

ODAM Memory for Cursor is an open-source extension that gives Cursor a real, project-scoped long-term memory layer:
• hooks into Cursor’s beforeSubmitPrompt / afterAgentResponse / afterAgentThought events
• stores chat + code artifacts in an external memory engine (ODAM)
• injects only the most relevant facts back into .cursor/rules/odam-memory.mdc before each prompt (sketched below)
• isolates memory per workspace via session_id and shows project-specific stats in the status bar
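To make the injection step concrete, here is a minimal TypeScript sketch of what a beforeSubmitPrompt handler could do: fetch relevant facts from ODAM and rewrite .cursor/rules/odam-memory.mdc. The /memory/retrieve endpoint, its payload and response fields, the port, and the way session_id is derived from the workspace path are illustrative assumptions, not the extension's actual code (Node 18+ assumed for the global fetch).

    // Hypothetical sketch, not the extension's real implementation.
    import { createHash } from "node:crypto";
    import { mkdirSync, writeFileSync } from "node:fs";
    import { join } from "node:path";

    const ODAM_URL = process.env.ODAM_URL ?? "http://localhost:8710"; // assumed local ODAM service

    interface RetrievedFact {
      kind: "entity" | "relationship" | "decision" | "outcome";
      summary: string;
    }

    // Isolate memory per workspace: derive a stable session_id from the workspace path.
    function sessionIdFor(workspaceRoot: string): string {
      return createHash("sha256").update(workspaceRoot).digest("hex").slice(0, 16);
    }

    // Before each prompt: retrieve only the facts relevant to the prompt text
    // and write them into .cursor/rules/odam-memory.mdc so Cursor picks them up.
    export async function beforeSubmitPrompt(workspaceRoot: string, prompt: string): Promise<void> {
      const res = await fetch(`${ODAM_URL}/memory/retrieve`, { // endpoint name is an assumption
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ session_id: sessionIdFor(workspaceRoot), query: prompt, limit: 20 }),
      });
      if (!res.ok) return; // fail open: never block the prompt on memory errors

      const facts = (await res.json()) as RetrievedFact[];
      const rulesDir = join(workspaceRoot, ".cursor", "rules");
      mkdirSync(rulesDir, { recursive: true });

      const lines = [
        "# ODAM project memory (auto-generated)",
        ...facts.map((f) => `- [${f.kind}] ${f.summary}`),
      ];
      writeFileSync(join(rulesDir, "odam-memory.mdc"), lines.join("\n") + "\n");
    }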

ODAM (Ontology Driven Agent Memory) is a stand-alone memory microservice that gives any LLM product selective, long-term memory using entity extraction, relationship graphs, embeddings, and memory guards. It has been running in production inside our mental-health platform AI PSY HELP, which handles tens of thousands of sensitive conversations and requires stable long-term personalization plus strict safety constraints. The same memory engine now powers the Cursor extension: think of it as a dedicated brain for your AI tools, specialized in remembering and updating context over time.

At a high level, the Cursor extension:
1. captures every chat & code interaction via the official hooks
2. builds an evolving knowledge graph of your project in ODAM
3. injects only the relevant facts back into Cursor before each prompt

A small Hook Event Server runs locally. Cursor calls the official hooks, tiny scripts forward events, and ODAM responds with compact, structured facts (entities, relationships, decisions, outcomes) instead of raw history. That keeps the context window lean and focused.
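A minimal sketch of such a forwarding server is below, assuming the hook scripts POST JSON events to localhost:8765/event and that ODAM exposes an /ingest endpoint; the port and both paths are assumptions for illustration, and the real extension may structure this differently.

    // Hypothetical Hook Event Server: receives Cursor hook events and forwards them to ODAM.
    import { createServer } from "node:http";

    const ODAM_URL = process.env.ODAM_URL ?? "http://localhost:8710"; // assumed ODAM address

    createServer((req, res) => {
      if (req.method !== "POST" || req.url !== "/event") {
        res.writeHead(404).end();
        return;
      }
      let raw = "";
      req.on("data", (chunk) => (raw += chunk));
      req.on("end", async () => {
        // Forward the raw hook event (beforeSubmitPrompt / afterAgentResponse /
        // afterAgentThought) to ODAM, which turns it into structured facts.
        await fetch(`${ODAM_URL}/ingest`, { // ingest path is an assumption
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: raw,
        }).catch(() => { /* never let memory failures disturb the editor */ });
        res.writeHead(204).end();
      });
    }).listen(8765, () => console.log("Hook Event Server listening on :8765"));

The design choice here is to fail open: if the memory service is down, events are dropped and Cursor keeps working normally.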

Under the hood, ODAM maintains episodic, semantic, procedural and project memory; a knowledge graph of services, modules, APIs, tools, issues and constraints; and an embedding index that retrieves only the most relevant facts. Memory enforcement, context-injection metrics and memory health indicators keep this long-term memory reliable.
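Purely to illustrate what "compact, structured facts" could look like on the wire, here is a hypothetical TypeScript shape; the type names and fields are assumptions, not ODAM's actual schema.

    // Illustrative only: assumed shape of a fact returned by ODAM's retrieval step.
    type MemoryKind = "episodic" | "semantic" | "procedural" | "project";

    interface GraphEntity {
      id: string;                 // e.g. "service:billing-api"
      type: "service" | "module" | "api" | "tool" | "issue" | "constraint";
      name: string;
    }

    interface GraphRelationship {
      from: string;               // entity id
      to: string;                 // entity id
      label: string;              // e.g. "depends_on", "replaced_by", "reverted"
    }

    interface MemoryFact {
      kind: MemoryKind;
      entities: GraphEntity[];
      relationships: GraphRelationship[];
      decision?: string;          // what was chosen and why
      outcome?: string;           // what happened afterwards (worked, regressed, reverted)
      relevance: number;          // score from the embedding index
      recordedAt: string;         // ISO timestamp, used to keep context fresh
    }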

ODAM did not start as a dev-tools project — it already powers AI PSY HELP and pilots in skills, employability and recovery programs, where tracking progress over months matters more than answering a single question.

Now the same core architecture supports code and project work inside Cursor.

GitHub: https://github.com/aipsyhelp/Cursor_ODAM
Site: https://odam.dev/
