I've been using Cursor heavily for building internal tools, and the biggest productivity gains came from investing time in detailed skills files that explain the codebase architecture, naming conventions, and common patterns.
Without something like LNAI, the natural tendency is to just use the default configs and wonder why the AI keeps suggesting solutions that don't match your existing patterns. The act of writing "here's how we structure components" once, and having that context available everywhere, changes the quality of suggestions dramatically.
The debate about whether unified prompts are better or worse than tool-specific ones misses this point. The 80/20 is getting the project context documented at all. Whether you tweak prompts per model is optimization at the margins compared to having zero documented context.
One thing I'd add: skills work best when they're specific about what NOT to do. "Don't create new utility files, add to existing ones" or "Don't use class components" saves more correction time than positive instructions.
Then for the application-specific documentation, I understand why you'd want to share it, as it stays the same for all agents touching the same codebase. But that's easily solved by putting it in DESIGN.md or whatever and appending "Remember to check against DESIGN.md before changing the architecture" or similar.
However, given how many tools there are and how fast each one moves, I find myself jumping between them quite often, just to see which ones I like most or whether a tool has improved since I last checked it. For that, LNAI is very helpful.
Most prompts I run in all four at the same time and literally compare the git diffs from their work, so I understand totally :) But even for comparison, if you use the identical config for all of them you're not actually seeing the difference, because again, they need different system prompts. Comparing with the same config means you're not seeing the best of each model.
I ran into this building a spec/skill sync system [1] - the "sync once" model breaks down when you need to track whether downstream consumers are aware of upstream changes.
[1] https://github.com/anupamchugh/shadowbook

For transformed files (Cursor's `.mdc` frontmatter, GEMINI.md sub-directory rules), you re-run `lnai sync`. LNAI maintains a manifest tracking every generated file with content hashes, so it knows what changed and cleans up orphans automatically.
So it's not really "sync once", it's "symlink for instant propagation, regenerate-on-demand for transforms." The manifest ensures LNAI always knows its downstream state.
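Conceptually, the manifest logic looks something like the sketch below (simplified; the field names and hashing details are illustrative, not the actual implementation):

```typescript
import { createHash } from "node:crypto";
import { existsSync, unlinkSync, writeFileSync } from "node:fs";

// Illustrative manifest shape: generated file path -> hash of its content at sync time.
type Manifest = Record<string, string>;

const hash = (content: string) =>
  createHash("sha256").update(content).digest("hex");

// `outputs` is everything the current sync wants to generate.
// Files whose hash matches the previous manifest are skipped; manifest entries
// with no corresponding output are orphans and get deleted.
function applySync(outputs: Map<string, string>, prev: Manifest): Manifest {
  const next: Manifest = {};
  for (const [path, content] of outputs) {
    const h = hash(content);
    if (prev[path] !== h) writeFileSync(path, content);
    next[path] = h;
  }
  for (const stale of Object.keys(prev)) {
    if (!outputs.has(stale) && existsSync(stale)) unlinkSync(stale); // orphan cleanup
  }
  return next;
}
```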
This system can also break down if you create new skills/rules directly in the tool-specific directories (.claude, .codex, etc.), but that is against LNAI's philosophy. If you need per-tool overrides, you put them in `.ai/.{claude/codex/etc.}` sub-directories and LNAI manages them for you.
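For example, a layout with a Claude-only override could look like this (the structure inside the override directory is hypothetical):

    .ai/
    ├── AGENTS.md
    ├── rules/
    └── .claude/
        └── rules/        # extra rules applied only when exporting for Claude Code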
But ... as the author and maintainer of Ruler I can tell you that I don't use it and I don't recommend using it (or this new tool).
In almost all cases it isn't necessary anymore - most agents support AGENTS.md (or at least a hack like `@AGENTS.md` in CLAUDE.md), and Agent Skills are the best way to customise agents and are available everywhere now.
There are some corner cases where using a tool like Ruler may still make sense, but if in doubt, you probably don't need it.
I would really like all AI coding agents to have the same config format, but I feel like we are not there yet :/
    .ai/
    ├── AGENTS.md
    ├── rules/
    ├── skills/
    └── settings.json   # MCP servers, permissions
Run `lnai sync` and it exports to native formats for 7 tools: Claude Code, Cursor, GitHub Copilot, Gemini CLI, OpenCode, Windsurf, and Codex. The interesting part is it's not just copying files. Each tool has quirks:
- Cursor wants `.mdc` files with `globs` arrays in frontmatter
- Gemini reads rules at the directory level, so rules get grouped
- Permissions like `Bash(git:*)` become `Shell(git)` for Cursor
- Some tools don't support certain features (e.g., Cursor has no "ask" permission level); LNAI warns but doesn't fail
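To give a rough idea, the Cursor case could be handled with something like the sketch below (the `Rule` shape and function names are illustrative, not the real internals):

```typescript
// Simplified sketch: rendering a generic rule to Cursor's .mdc format and
// mapping a permission string to Cursor's vocabulary.
interface Rule {
  name: string;
  globs: string[];   // file patterns the rule applies to
  body: string;      // markdown body of the rule
}

function toCursorMdc(rule: Rule): string {
  const frontmatter = [
    "---",
    `description: ${rule.name}`,
    `globs: [${rule.globs.map((g) => JSON.stringify(g)).join(", ")}]`,
    "---",
  ].join("\n");
  return `${frontmatter}\n\n${rule.body}\n`;
}

// e.g. "Bash(git:*)" -> "Shell(git)"; unsupported permissions return undefined
// so the caller can warn instead of failing.
function toCursorPermission(perm: string): string | undefined {
  const match = perm.match(/^Bash\(([^:)]+)(?::\*)?\)$/);
  return match ? `Shell(${match[1]})` : undefined;
}
```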
Where possible, it uses symlinks. So `.claude/CLAUDE.md` → `../.ai/AGENTS.md`. Edit the source, all tools see the change immediately without re-syncing.
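In Node terms, that's roughly the following (simplified; the copy fallback for environments without symlink support is an assumption, not confirmed behaviour):

```typescript
import { copyFileSync, mkdirSync, rmSync, symlinkSync } from "node:fs";
import { dirname, relative } from "node:path";

// Simplified sketch: link a tool's file back to its source in .ai/, e.g.
//   linkOrCopy(".ai/AGENTS.md", ".claude/CLAUDE.md")
// creates .claude/CLAUDE.md -> ../.ai/AGENTS.md.
function linkOrCopy(source: string, target: string): void {
  mkdirSync(dirname(target), { recursive: true });
  rmSync(target, { force: true });
  try {
    // Relative link so the checkout can be moved without breaking it.
    symlinkSync(relative(dirname(target), source), target);
  } catch {
    copyFileSync(source, target); // assumed fallback where symlinks aren't available
  }
}
```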
Usage:
    npm install -g lnai
    lnai init       # Creates .ai/ directory
    lnai validate   # Checks for config errors
    lnai sync       # Exports to all enabled tools
It's MIT licensed. The code is TypeScript with a plugin architecture: each tool is a plugin that implements import/export/validate.

GitHub: https://github.com/KrystianJonca/lnai
Docs: https://lnai.sh
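Roughly, a plugin satisfies a contract like this (a simplified sketch, not the exact interface; see the repo for the real one):

```typescript
// Illustrative plugin contract; names and shapes are not the actual API.
interface AiConfig {
  agentsMd: string;                                               // contents of .ai/AGENTS.md
  rules: Array<{ name: string; globs: string[]; body: string }>;
  settings: Record<string, unknown>;                              // MCP servers, permissions, ...
}

interface GeneratedFile {
  path: string;          // e.g. ".cursor/rules/testing.mdc"
  content?: string;      // written for transformed files
  symlinkTo?: string;    // set instead of content when a symlink suffices
}

interface ToolPlugin {
  name: string;                                    // "cursor", "claude-code", ...
  import(projectRoot: string): Partial<AiConfig>;  // pull existing native config into .ai/
  export(config: AiConfig): GeneratedFile[];       // render .ai/ into the tool's native files
  validate(config: AiConfig): string[];            // warnings for unsupported features
}
```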
Would appreciate feedback, especially from anyone else dealing with this config hell problem.