Show HN: Mcp2cli – One CLI for every API, 96-99% fewer tokens than native MCP
58 points
4 hours ago
| 17 comments
| github.com
Every MCP server injects its full tool schemas into context on every turn — 30 tools cost ~3,600 tokens/turn whether the model uses them or not. Over 25 turns with 120 tools, that's 362,000 tokens just for schemas.

mcp2cli turns any MCP server or OpenAPI spec into a CLI at runtime. The LLM discovers tools on demand:

    mcp2cli --mcp https://mcp.example.com/sse --list             # ~16 tokens/tool
    mcp2cli --mcp https://mcp.example.com/sse create-task --help  # ~120 tokens, once
    mcp2cli --mcp https://mcp.example.com/sse create-task --title "Fix bug"
No codegen, no rebuild when the server changes. Works with any LLM — it's just a CLI the model shells out to. Also handles OpenAPI specs (JSON/YAML, local or remote) with the same interface.

Token savings are real, measured with cl100k_base: 96% for 30 tools over 15 turns, 99% for 120 tools over 25 turns.
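
Those percentages are easy to sanity-check with back-of-envelope arithmetic. A sketch, assuming ~120 tokens per tool schema (implied by 30 tools ≈ 3,600 tokens/turn) and that the model only opens `--help` for a handful of tools (`tools_used` is an illustrative assumption, not a measured value):

```python
# Back-of-envelope model of schema overhead; constants are taken from the
# figures in the post, not independent measurements.
TOKENS_PER_SCHEMA = 120       # ~3,600 tokens/turn / 30 tools
TOKENS_PER_LIST_ENTRY = 16    # per-tool cost of --list

def native_mcp_cost(tools: int, turns: int) -> int:
    """Native MCP re-injects every tool schema on every turn."""
    return tools * TOKENS_PER_SCHEMA * turns

def cli_cost(tools: int, tools_used: int) -> int:
    """mcp2cli-style: one --list, plus one --help per tool actually used."""
    return tools * TOKENS_PER_LIST_ENTRY + tools_used * TOKENS_PER_SCHEMA

for tools, turns in [(30, 15), (120, 25)]:
    native = native_mcp_cost(tools, turns)
    cli = cli_cost(tools, tools_used=3)
    print(f"{tools} tools / {turns} turns: "
          f"native={native:,}  cli={cli:,}  saved={1 - cli / native:.0%}")
```

Under these assumptions the savings land in the same 96–99% ballpark as the measured figures.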

It also ships as an installable skill for AI coding agents (Claude Code, Cursor, Codex): `npx skills add knowsuchagency/mcp2cli --skill mcp2cli`

Inspired by Kagan Yilmaz's CLI vs MCP analysis and CLIHub.

https://github.com/knowsuchagency/mcp2cli

jancurn
39 minutes ago
[-]
Cool, adding this to my list of MCP CLIs:

  - https://github.com/apify/mcpc
  - https://github.com/chrishayuk/mcp-cli
  - https://github.com/wong2/mcp-cli
  - https://github.com/f/mcptools
  - https://github.com/adhikasp/mcp-client-cli
  - https://github.com/thellimist/clihub
  - https://github.com/EstebanForge/mcp-cli-ent
  - https://github.com/knowsuchagency/mcp2cli
  - https://github.com/philschmid/mcp-cli
  - https://github.com/steipete/mcporter
  - https://github.com/mattzcarey/cloudflare-mcp
  - https://github.com/assimelha/cmcp
reply
oulu2006
2 minutes ago
[-]
Precisely, there are about 100 of these, and everyone makes a new one every week.
reply
Doublon
1 hour ago
[-]
We had `curl`, HTTP and OpenAPI specs, but we created MCP. Now we're wrapping MCP into CLIs...
reply
Charon77
8 minutes ago
[-]
MCP only exists because there's no easy way for AI to run commands on servers.

Oh wait there's ssh. I guess it's because there's no way to tell AI agents what the tool does, or when to invoke it... Except that AI pretty much knows the syntax of all of the standard tools, even sed, jq, etc...

Yeah, ssh should've been the norm, but someone is getting promoted for inventing MCP

reply
stephantul
2 hours ago
[-]
Tokens saved should not be your north star metric. You should be able to show that tool call performance is maintained while consuming fewer tokens. I have no idea whether that is the case here.

As an aside: this is a cool idea but the prose in the readme and the above post seem to be fully generated, so who knows whether it is actually true.

reply
hrmtst93837
32 minutes ago
[-]
Token counts alone tell you nothing about correctness, latency, or developer ergonomics. Run a deterministic test suite that exercises representative MCP calls against both native MCP and mcp2cli while recording token usage, wall time, error rate, and output fidelity.

Measure fidelity with exact diffs and embedding similarity, and include streaming behavior, schema-change resilience, and rate-limit fallbacks in the cases you care about. Check the repo for a runnable benchmark, archived fixtures captured with vcrpy or WireMock, and a clear test harness that reproduces the claimed 96 to 99 percent savings.
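
A minimal sketch of the fidelity/timing half of such a harness (the scenario runners below are illustrative stubs; a real harness would shell out to native MCP and mcp2cli and add a tokenizer for token counts):

```python
import difflib
import time

def fidelity(a: str, b: str) -> float:
    """Exact-diff fidelity: 1.0 means byte-identical outputs."""
    return difflib.SequenceMatcher(None, a, b).ratio()

def timed(fn):
    """Run one scenario runner, returning (wall_seconds, output)."""
    t0 = time.perf_counter()
    out = fn()
    return time.perf_counter() - t0, out

# Illustrative stubs standing in for real native-MCP and mcp2cli runners.
scenarios = [
    ("create-task", lambda: "task #1 created", lambda: "task #1 created"),
]

for name, native_run, cli_run in scenarios:
    t_native, out_native = timed(native_run)
    t_cli, out_cli = timed(cli_run)
    print(f"{name}: fidelity={fidelity(out_native, out_cli):.2f} "
          f"native={t_native * 1e3:.1f}ms cli={t_cli * 1e3:.1f}ms")
```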

reply
benvan
1 hour ago
[-]
Nice project! I've been working on something very similar here https://github.com/max-hq/max

It works by schematising the upstream and keeping data locally synchronised, plus a common query language, so the longer-term goals are more about avoiding API limits / escaping the confines of the MCP query feature set - i.e. token savings on reading the data itself (in many cases, thousands of times fewer tokens)

Looking forward to trying this out!

reply
tern
49 minutes ago
[-]
There are a handful of these. I've been using this one: https://github.com/smart-mcp-proxy/mcpproxy-go
reply
DieErde
1 hour ago
[-]
Why is the concept of "MCP" needed at all? Wouldn't a single tool - web access - be enough? Then you can prompt:

    Tell me the hottest day in Paris in the
    coming 7 days. You can find useful tools
    at www.weatherforadventurers.com/tools
And then the tools url can simply return a list of urls in plain text like

    /tool/forecast?city=berlin&day=2026-03-09 (Returns highest temp and rain probability for the given day in the given city)
Which return the data in plain text.

What additional benefits does MCP bring to the table?

reply
SyneRyder
1 hour ago
[-]
A few things: in this case, you have to provide the tool list in your prompt for the AI to know it exists. But you probably want the AI agent to be able to act and choose tools without you micromanaging and reminding it in every prompt, so then you'd need a tool list... and then you're back to providing the tool list automatically à la MCP again.

MCP can provide validation & verification of the request before making the API call. Giving the model a /tool/forecast URL doesn't prevent the model from deciding to instead explore what other tools might be available on the remote server instead, like deciding to try running /tool/imagegenerator or /tool/globalthermonuclearwar. MCP can gatekeep what the AI does, check that parameters are valid, etc.

Also, MCP can be used to do local computation, work with local files etc, things that web access wouldn't give you. CLI will work for some of those use cases too, but there is a maximum command line length limit, so you might struggle to write more than 8kB to a file when using the command line, for example. It can be easier to get MCP to work with binary files as well.

I tend to think of local MCP servers like DLLs, except the function calls are over stdio and use tons of wasteful JSON instead of being a direct C-function call. But thinking of where you might use a DLL and where you might call out to a CLI can be a useful way of thinking about the difference.
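
The argv-length point is checkable directly: `getconf ARG_MAX` is real, while `sometool` below is a hypothetical placeholder showing the usual stdin workaround (the ~8 kB figure matches e.g. Windows' cmd limit; Linux ARG_MAX is typically far larger):

```shell
# argv and environment share a kernel limit (POSIX ARG_MAX);
# some shells/OSes cap a single command line at only a few KB.
getconf ARG_MAX                 # print this system's limit

# Passing a huge payload as an argument can hit that limit:
#   sometool --data "$BIG"      # hypothetical tool, may fail on large $BIG
# Streaming it on stdin avoids the argv limit entirely:
BIG=$(head -c 100000 /dev/zero | tr '\0' 'a')
printf '%s' "$BIG" | wc -c      # stdin carries all 100000 bytes
```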

reply
Phlogistique
1 hour ago
[-]
The point is authorization. With full web access, your agent can reach anything and leak anything.

You could restrict where it can go with domain allowlists but that has insufficient granularity. The same URL can serve a legitimate request or exfiltrate data depending on what's in the headers or payload: see https://embracethered.com/blog/posts/2025/claude-abusing-net...

So you need to restrict not only where the agent can reach, but what operations it can perform, with the host controlling credentials and parameters. That brings us to an MCP-like solution.

reply
rvz
57 minutes ago
[-]
But this is no different to using an API key with access controls and curl and you get the same thing.

MCP is just a worse version of the above, allowing lots of data exfiltration and manipulation by the LLM.

reply
ewidar
1 hour ago
[-]
One thing that I currently find useful on MCPs is granular access control.

Not all services provide good token definition or access control, and often have API Key + CLI combo which can be quite dangerous in some cases.

With an MCP even these bad interfaces can be fixed up on my side.

reply
iddan
1 hour ago
[-]
The prophecy of the hypermedia web
reply
Traubenfuchs
1 hour ago
[-]
I feel like I haven’t read anything about this in combination with MCP and like I am taking crazy pills: does no one remember HATEOAS?
reply
jbverschoor
1 hour ago
[-]
Proxying / gatekeeping
reply
Intermernet
35 minutes ago
[-]
I may be showing my ignorance here, but wouldn't the ideal situation be for the service to use the same number of tokens no matter what client sent the query?

If the service is using more tokens to produce the same output from the same query, but over a different protocol, then the service is a scam.

reply
mvc
26 minutes ago
[-]
When you're using an agent, the "query" isn't just each bit of text you enter into the agent prompt. It's the whole conversation.

But I do wonder whether these tools have been tested to show that the quality of subsequent responses stays the same.

reply
Intermernet
21 minutes ago
[-]
That doesn't explain why the protocol matters. Surely for equivalent responses, you need to send equivalent payloads. You shouldn't be able to hack this from the client side.
reply
nwyin
2 hours ago
[-]
cool!

anthropic mentions MCPs eating up context and solutions here: https://www.anthropic.com/engineering/code-execution-with-mc...

I built one specifically for Cognition's DeepWiki (https://crates.io/crates/dw2md) -- but it's rather narrow. Something more general like this clearly has more utility.

reply
ejoubaud
1 hour ago
[-]
How does this differ from mcporter? https://github.com/steipete/mcporter/
reply
jofzar
1 hour ago
[-]
How is this the 5th one of these I've seen this week? Is everyone just trying to make the same thing?
reply
hnlmorg
57 minutes ago
[-]
Basically yes.
reply
philipp-gayret
2 hours ago
[-]
Someone had to do it. MCP in bash would make these tools composable, which I think is the strongest benefit for high-capability agents like Claude, Cursor and the like, who can write Bash better than I can. Haven't gotten into MCP since its early release because of the issues you named. Nice work!
reply
silverwind
1 hour ago
[-]
How would the LLM exactly discover such unknown CLI commands?
reply
Mashimo
1 hour ago
[-]
Skills or tell it the --list command would be my guess.
reply
jkisiel
1 hour ago
[-]
How is it different from 'mcporter', already included in e.g. openclaw?
reply
Ozzie_osman
1 hour ago
[-]
I kind of feel like it might be better to go from CLI to MCP.
reply
tuananh
1 hour ago
[-]
MCP just needs to add dynamic tool discovery and lazy-load the schemas; that would solve this token problem, right?
reply
rvz
1 hour ago
[-]
MCP itself is a flawed standard to begin with, as I said before [0], and it wraps around an API from the start.

You might as well directly create a CLI tool that works with the AI agents which does an API call to the service anyway.

[0] https://news.ycombinator.com/item?id=44479406

reply
liminal-dev
2 hours ago
[-]
This post and the project README are obviously generated slop, which personally makes me completely skip the project altogether, even if it works.

If you want humans to spend time reading your prose, then spend time actually writing it.

reply