Show HN: Chippery, an OpenCode fork that (often) uses 20-40% fewer tokens
by pell | 1 hour ago | chippery.ai
I kept hitting token limits with Claude Code on larger codebases and ended up building Chippery (a fork of OpenCode) to reduce context size outside the model.

It uses a symbolic index, a navigation layer, semantic and PageRank-style ranking, and some context reduction / compression techniques to avoid resending and re-reading the same files and repeating the same lookups.
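To give a flavor of the ranking idea (a simplified sketch, not the actual implementation): run PageRank over the symbol reference graph, so definitions that many other symbols depend on get surfaced first and the rest are fetched on demand.

    // Minimal PageRank sketch over a symbol reference graph.
    // edges.get(s) = the symbols that s references. Rank flows toward
    // symbols that many (highly ranked) symbols depend on.
    function rankSymbols(
      edges: Map<string, string[]>,
      damping = 0.85,
      iterations = 20,
    ): Map<string, number> {
      const nodes = [...edges.keys()];
      const n = nodes.length;
      let rank = new Map<string, number>();
      for (const s of nodes) rank.set(s, 1 / n);

      for (let i = 0; i < iterations; i++) {
        // Base (teleport) term for every node.
        const next = new Map<string, number>();
        for (const s of nodes) next.set(s, (1 - damping) / n);
        // Each node splits its rank evenly across its outgoing references.
        for (const [src, targets] of edges) {
          if (targets.length === 0) continue; // dangling mass dropped for brevity
          const share = (damping * rank.get(src)!) / targets.length;
          for (const t of targets) {
            // References to symbols outside the index are ignored.
            if (next.has(t)) next.set(t, next.get(t)! + share);
          }
        }
        rank = next;
      }
      return rank;
    }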

I ran benchmarks mostly with Anthropic’s models and saw roughly 20–40% token reduction on average, depending on the workflow; in some cases quite a bit more, in others less.

There’s also a Claude Code hook that exposes the same tools, but it’s still a bit clunky.
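If you want to poke at the hook side: Claude Code hooks are configured in .claude/settings.json, so the wiring looks roughly like the snippet below (the chippery command shown is illustrative, not the exact CLI):

    {
      "hooks": {
        "PreToolUse": [
          {
            "matcher": "Read|Grep|Glob",
            "hooks": [
              { "type": "command", "command": "chippery context --hook" }
            ]
          }
        ]
      }
    }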

It’s fully open-source, with an optional paid Pro / lifetime tier for support.
