CLI proxy that reduces LLM token consumption by 60-90% on common dev commands (github.com)
5 points | 1 hour ago | 2 comments
zeristor
1 hour ago
I ran into this during some of my optimisation passes using Claude via Windsurf.

I'm big on process: my Rules file was sprawling, but I slimmed down the frequently used rules and broke the rest out into sub-files.

Using scripts resolves the 'the agent forgot to do something' problem, and I went from Bash to Python since I was using tests a lot. It's nice to have a thousand or so tests running in parallel regularly.
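A minimal sketch of that idea: encode the required steps in a script so nothing can be silently skipped. The specific toolchain below (ruff for linting, pytest with the pytest-xdist `-n auto` flag for parallel tests) is my assumption, not something the commenter named.

```python
# Hypothetical checklist script: every required step is listed once,
# so an agent (or a human) can't forget one.
import subprocess
import sys

# Assumed toolchain; swap in whatever your project actually uses.
REQUIRED_STEPS = [
    ["ruff", "check", "."],    # lint
    ["pytest", "-n", "auto"],  # run tests in parallel (needs pytest-xdist)
]

def run_all(steps):
    """Run each step in order; abort with a non-zero exit on first failure."""
    for cmd in steps:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            sys.exit(f"step failed: {' '.join(cmd)}")

if __name__ == "__main__":
    run_all(REQUIRED_STEPS)
```

The point is that the script, not the agent's memory, is the source of truth for what "done" means.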

All of this is probably redundant with the new Claude projects, but I was inspired by the idea.

Still, all this process detracts from actually finishing the projects that are burning in my mind all the time.

Not bad for my second week of using it, but I am realising MCPs might be better.

Oras
1 hour ago
It looks so promising, but the first thing that came to my mind is that these models are mostly trained on the default CLI output. Would compressing it mess with the output of these models?