Show HN: Cerebrun – A unified memory for AI assistants (MCP and pgvector)
Every AI conversation starts from scratch. I've been using OpenClaw, Windsurf, and Gemini interchangeably for months, but none of them knows what I discussed with the others. I have to explain everything from the beginning each time.

So I built Cerebrun, an MCP server that acts as a persistent memory layer across all of them.

What Cerebrun does:

• Cross-conversation retrieval - Automatically finds relevant snippets from past threads and injects them into current context, even across different LLMs

• 4-layer data model - Not all memories are equal. Layer 0 holds public preferences (coding style, favorite tools, language preference etc.). Layer 3 is an encrypted vault for sensitive data (API keys, credentials). Each layer has its own permission scope.

• Semantic search – pgvector under the hood. Knowledge entries are auto-embedded, so "I prefer functional programming" surfaces when you ask about error handling—no exact keyword match needed.

• Multi-LLM gateway - Works with OpenAI, Anthropic, Gemini, Ollama Cloud, OpenClaw, or anything with agentic capabilities. Bring your own keys; no vendor lock-in.
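
The layer model could look roughly like this sketch (the layer names for 1 and 2, the scope names, and the `visible` helper are my assumptions for illustration; the post only describes layers 0 and 3):

```python
from dataclasses import dataclass
from enum import IntEnum

class Layer(IntEnum):
    # Layers 0 and 3 come from the post; 1 and 2 are hypothetical placeholders.
    PUBLIC_PREFS = 0  # coding style, favorite tools, language preference
    PROJECT = 1       # hypothetical: per-project notes
    PERSONAL = 2      # hypothetical: personal context
    VAULT = 3         # encrypted: API keys, credentials

@dataclass
class Memory:
    layer: Layer
    content: str

# Hypothetical permission scopes: which layers a given caller may read.
SCOPES = {
    "any_llm": {Layer.PUBLIC_PREFS},
    "trusted_agent": {Layer.PUBLIC_PREFS, Layer.PROJECT, Layer.PERSONAL},
    "vault_unlocked": set(Layer),
}

def visible(memories, scope):
    """Return only the memories the given scope is allowed to see."""
    allowed = SCOPES[scope]
    return [m for m in memories if m.layer in allowed]

store = [
    Memory(Layer.PUBLIC_PREFS, "prefers functional programming"),
    Memory(Layer.VAULT, "OPENAI_API_KEY=..."),
]
print([m.content for m in visible(store, "any_llm")])
# the vault entry stays hidden from an unprivileged caller
```

The point of per-layer scoping is that a random LLM session can read your coding preferences without ever being able to touch the vault.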

Why this matters:

Most memory solutions are either platform-specific (ChatGPT's memory) or require you to manually paste context every time. Cerebrun sits in the middle—your AI tools query it via MCP, and it returns only what's relevant for the current conversation.
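
The retrieval step can be sketched with plain cosine distance, the same metric behind pgvector's `<=>` operator, over toy embeddings (the vectors and helper names here are illustrative, not Cerebrun's actual code):

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity, the metric pgvector's <=> operator computes."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

# Toy 3-d embeddings standing in for real embedding-model output.
memories = {
    "prefers functional programming": [0.9, 0.1, 0.0],
    "favorite editor is Vim":         [0.1, 0.9, 0.0],
}
query = [0.8, 0.2, 0.1]  # e.g. an embedded question about error handling

# Rank stored memories by distance and inject the closest into the prompt,
# the way the MCP server would before the model answers.
best = min(memories, key=lambda text: cosine_distance(query, memories[text]))
print(best)  # prints "prefers functional programming"
```

In production this ranking is a single SQL query with an `ORDER BY embedding <=> query LIMIT k` clause, so Postgres does the heavy lifting.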

Privacy angle:

Built with Rust/Axum. Sensitive data in the vault layer is AES-256-GCM encrypted. Your credentials aren't sitting in plaintext in a database, and the memory layer runs locally—you control the data.

Current state:

The MVP is functional and the web dashboard is live.

Roadmap:

A Chrome extension that collects context from non-agentic web pages and sends it to the MCP server.

Site: https://cereb.run
Repo: https://github.com/niyoseris/Cerebrun
