Brings Claude Code's sub-agent pattern to Cursor (and other MCP-compatible tools): define task-specific "specialists" in Markdown, each running in its own isolated context window and returning a concise result to the main agent — without spawning git worktrees.
I kept hitting a failure mode common to LLM-based IDEs: the agent takes the shortest path to "done," skipping steps experienced engineers rely on — checking for similar code, following conventions, writing tests, doing a proper review pass.
The deeper issue is context. As implementation progresses, the main chat fills up with details. By the time you reach "now write tests," the model is working with a polluted or exhausted context.
Cursor 2.0's parallel agents solve a different problem: running the same task across separate worktrees and picking the best result. Great for A/B testing solutions, but it doesn't help when you want phase separation — implementation first, then testing/review with a fresh brain.
Claude Code's sub-agent pattern does exactly that. The main agent orchestrates and delegates to specialists (test writer, code reviewer), each running in an isolated context window. This repo brings that pattern to Cursor via MCP.
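To make the mechanics concrete, here is a minimal sketch of how a Markdown-defined specialist could be loaded and turned into a prompt for a fresh, isolated session. The file layout, field names, and helper functions are illustrative assumptions, not this repo's actual API:

```python
# Hypothetical sketch: load a specialist definition (YAML-style frontmatter
# plus a Markdown body) and compose the prompt for an isolated sub-agent
# session. Names and format are assumptions, not the repo's real API.

from dataclasses import dataclass

@dataclass
class Specialist:
    name: str
    description: str
    system_prompt: str

def parse_specialist(markdown: str) -> Specialist:
    """Split frontmatter from body and read simple key: value fields."""
    _, frontmatter, body = markdown.split("---", 2)
    fields = {}
    for line in frontmatter.strip().splitlines():
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    return Specialist(
        name=fields["name"],
        description=fields["description"],
        system_prompt=body.strip(),
    )

def delegation_prompt(spec: Specialist, task: str) -> str:
    """The sub-agent sees only its own instructions plus the delegated
    task -- none of the main chat's implementation noise."""
    return f"{spec.system_prompt}\n\nTask from the main agent:\n{task}"

EXAMPLE = """\
---
name: code-reviewer
description: Reviews a diff for bugs, style, and missing tests
---
You are a meticulous code reviewer. Report only actionable findings.
"""

spec = parse_specialist(EXAMPLE)
prompt = delegation_prompt(spec, "Review the changes in src/auth.py")
```

The key property is in `delegation_prompt`: the specialist's input is built from scratch, so the main context never leaks in, and only the specialist's concise report flows back.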
Why this isn't just "open a new chat":

- Specialists are defined as reusable Markdown files, so roles like `code-reviewer` are standardized, not ad-hoc
- The main agent delegates explicitly and receives a focused report back
- The main context stays clean; specialists don't inherit implementation noise
- No git worktrees required
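For illustration, a specialist file might look like this. The exact fields and file location are assumptions, modeled on Claude Code's sub-agent format, not necessarily what this repo expects:

```markdown
---
name: code-reviewer
description: Reviews completed changes for bugs, convention violations, and missing tests
---

You are a senior code reviewer. You receive a description of the change
and the relevant diff. Return a short, prioritized list of findings;
do not restate the code.
```

Because the role lives in a file rather than an ad-hoc chat message, every delegation to `code-reviewer` starts from the same instructions.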
There's overhead to spawning a fresh CLI session (~5s on my M4), so I use this where the quality gain outweighs the cost — especially for late-stage review and QA.
How do you deal with context breakdown in long sessions — split phases, reset context, or just live with it?