Show HN: Model Tools Protocol (MTP) – Forget MCP, bash is all you need
7 points | 3 hours ago | 3 comments | github.com
Recently I was trying to use an MCP server to pull data from a service, but hit a limitation: the server didn't expose the data I needed, even though the service's REST API supported it. So I wrote a quick CLI wrapper around the API. It worked great, except Claude Code had no structured way to know what my CLI does or how to call it. For `gh` or `curl` the model can draw on extensive training data, but for a tool I'd just written, it was stabbing in the dark.

MCP solves this discovery problem, but it does so by rebuilding tool interaction from scratch: server processes, JSON-RPC transport, client-host handshakes. It got discovery right but threw out composability to get there. You can't pipe one MCP tool into another or run one in a cron job without a host process. Pulling a Confluence page, checking Jira for duplicates, and filing a ticket is three inference round-trips for work that should be a bash one-liner. I also get endlessly asked to re-authenticate with my MCP servers, something the `gh` CLI never makes me do.

I think the industry took a wrong turn here. We didn't need a new execution model for tools, we needed to add one capability to the execution model we already had. That's what Model Tools Protocol (MTP) is: a spec for making any CLI self-describing so LLMs can discover and use it.

MTP does that with a single convention: your CLI responds to `--mtp-describe` with a JSON schema describing its commands, args, types, and examples. No server, no transport, no handshake. I wrote SDKs for Click (Python), Commander.js (TypeScript), Cobra (Go), and Clap (Rust) that introspect the types and help strings your framework already has, so adding `--mtp-describe` to an existing CLI is a single function call.
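To make the convention concrete, here's a toy CLI that answers `--mtp-describe` by hand using plain argparse. The schema field names below are my illustrative guesses, not necessarily the published spec, and in practice the SDKs generate this JSON for you by introspecting the parser:

```python
#!/usr/bin/env python3
# greet.py -- a toy CLI that implements the --mtp-describe convention by hand.
# The schema shape here is illustrative; the Click/Commander/Cobra/Clap SDKs
# emit the real spec-compliant document for you.
import argparse
import json
import sys

DESCRIPTION = {
    "name": "greet",
    "commands": [
        {
            "name": "hello",
            "summary": "Print a greeting",
            "args": [
                {"name": "--who", "type": "string", "required": True},
                {"name": "--shout", "type": "boolean", "required": False},
            ],
            "examples": ["greet hello --who world"],
        }
    ],
}

def main() -> None:
    # Answer the discovery probe before any normal argument parsing.
    if "--mtp-describe" in sys.argv[1:]:
        print(json.dumps(DESCRIPTION, indent=2))
        return

    parser = argparse.ArgumentParser(prog="greet")
    sub = parser.add_subparsers(dest="command", required=True)
    hello = sub.add_parser("hello", help="Print a greeting")
    hello.add_argument("--who", required=True)
    hello.add_argument("--shout", action="store_true")
    ns = parser.parse_args(sys.argv[1:])

    msg = f"hello, {ns.who}"
    print(msg.upper() if ns.shout else msg)

if __name__ == "__main__":
    # Demo: simulate the discovery probe (a real CLI would just read argv).
    sys.argv = ["greet", "--mtp-describe"]
    main()
```

The point is only that discovery is one flag and one JSON document, not a server; with an SDK the hand-written `DESCRIPTION` dict disappears.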

I don't think MCP should disappear, so there's a bidirectional bridge. `mtpcli serve` exposes any `--mtp-describe` CLI as an MCP server, and `mtpcli wrap` goes the other direction, turning MCP servers into pipeable CLIs. The ~2,500 MCP servers out there become composable CLI tools you can script and run in CI without an LLM in the loop.
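For intuition on the `wrap` direction, here's a rough sketch (mine, not mtpcli's actual code) of the translation it implies: folding `--flag value` pairs from a CLI-style invocation into an MCP `tools/call` JSON-RPC request:

```python
# Conceptual sketch of the CLI -> MCP direction of the bridge: turn an
# invocation like `postMessage -- --channel "#ops" --text hi` into a
# JSON-RPC tools/call request. Boolean flags and shell quoting are
# ignored for brevity; this is an illustration, not mtpcli's code.
import json

def cli_to_jsonrpc(tool: str, cli_args: list[str], request_id: int = 1) -> str:
    """Fold `--flag value` pairs into the tools/call arguments object."""
    arguments = {}
    i = 0
    while i < len(cli_args):
        flag = cli_args[i]
        if not flag.startswith("--"):
            raise ValueError(f"expected a --flag, got {flag!r}")
        arguments[flag[2:]] = cli_args[i + 1]
        i += 2
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

print(cli_to_jsonrpc("postMessage", ["--channel", "#ops", "--text", "pod down"]))
```

The reverse direction (`mtpcli serve`) is the same mapping run the other way: MCP tool schemas become the `--mtp-describe` document, and incoming `tools/call` requests become process invocations.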

The real payoff is composition: your custom CLI, a third-party MCP server, and jq in a single pipeline, no tokens burned. I'll post a concrete example in the comments.

Try it:

  npm i -g @modeltoolsprotocol/mtpcli && mtpcli --mtp-describe
I know it's unlikely this will take off, as I can't compete with the great might of Anthropic, but I very much welcome collaborators. PRs are welcome on the spec, additional SDKs, or anything else. Happy building!

Spec and rationale: <https://github.com/modeltoolsprotocol/modeltoolsprotocol>

CLI tool: <https://github.com/modeltoolsprotocol/mtpcli>

SDKs: TypeScript (<https://github.com/modeltoolsprotocol/typescript-sdk>) | Python (<https://github.com/modeltoolsprotocol/python-sdk>) | Go (<https://github.com/modeltoolsprotocol/go-sdk>) | Rust (<https://github.com/modeltoolsprotocol/rust-sdk>)

nr378
3 hours ago
Here's a concrete example of what composition looks like in practice.

Say your team has an internal `infractl` CLI for managing your deploy infrastructure. No LLM has ever seen it in training data. You add `--mtp-describe` (one function call with any of the SDKs), then open Claude Code and type:

  > !mtpcli
  > How do I use infractl?
The first line runs `mtpcli`, which prints instructions teaching the LLM the `--mtp-describe` convention: how to discover tools, how schemas map to CLI invocations, how to compose with pipes. The second line causes the LLM to run `infractl --mtp-describe`, get back the full schema, and understand a tool it has never seen in training data. Now you say:

  > Write a crontab entry that posts unhealthy pods to the #ops Slack channel every 5 minutes
And it composes your custom CLI with a third-party MCP server it's never touched before:

  */5 * * * * infractl pods list --cluster prod --unhealthy --json \
    | mtpcli wrap --url "https://slack-mcp.example.com/v1/mcp" \
        postMessage -- --channel "#ops" --text "$(jq -r '.[] | .name')"
Your tool, a Slack MCP server, and `jq`, in a pipeline the LLM wrote because it could discover every piece. That script can run in CI, or on a Raspberry Pi. No tokens burned, no inference round-trips. The composition primitives have been here for 50 years. Bash is all you need!
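The schema-to-invocation step the model performs in that flow can be sketched roughly like this (again, the schema shape is my assumption rather than the spec, and `pods-list` is a stand-in for whatever `infractl` actually exposes):

```python
# Sketch of the mapping any client applies after reading a --mtp-describe
# schema: pick a command, validate flags against the declared args, and
# emit a concrete argv. Field names are illustrative assumptions.
def schema_to_argv(schema: dict, command: str, values: dict) -> list[str]:
    cmd = next(c for c in schema["commands"] if c["name"] == command)
    known = {a["name"] for a in cmd["args"]}
    argv = [schema["name"], command]
    for flag, value in values.items():
        if flag not in known:
            raise ValueError(f"{flag} is not in the schema for {command}")
        # Boolean True means a bare flag; anything else takes a value.
        argv += [flag] if value is True else [flag, str(value)]
    return argv

schema = {
    "name": "infractl",
    "commands": [{
        "name": "pods-list",
        "args": [{"name": "--cluster", "type": "string"},
                 {"name": "--unhealthy", "type": "boolean"},
                 {"name": "--json", "type": "boolean"}],
    }],
}
print(schema_to_argv(schema, "pods-list",
                     {"--cluster": "prod", "--unhealthy": True, "--json": True}))
# -> ['infractl', 'pods-list', '--cluster', 'prod', '--unhealthy', '--json']
```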
duwip
2 hours ago
> The ~2,500 MCP servers out there become composable CLI tools you can script and run in CI without an LLM in the loop

That's pretty cool.

The harder part is getting the various coding agents to run !mtpcli by default, plus getting CLI tool maintainers to add --mtp-describe. Creating a standard is hard, I guess.

jangojones
3 hours ago
I ran into this with Claude too. Using the gh CLI worked far better than the GitHub MCP. The model already knows and “understands” CLIs, and this feels like the right abstraction level for making tools discoverable without breaking composability.

Obviously the model has likely been trained on gh CLI already, but that just reinforces the idea that CLIs are a natural interface for models when discovery is handled well.
