The accident isn't that we somehow got a protocol that does things we couldn't do before. As other comments point out, MCP (the specification) isn't anything new or interesting.
No, the accident is that the AI Agent wave made interoperability hype, and vendor lock-in old-fashioned.
I don't know how long it'll last, but I sure appreciate it.
For those that don't remember/don't know, everything network-related in Windows used to use its own proprietary setup.
Then one day, a bunch of vendors got together and decided to have a shared standard to the benefit of basically everyone.
But the way I see it, AI agents created incentives for interoperability. Who needs an API when everyone is job secure via being a slow desktop user?
Well, your new personal assistant who charges by the Watt hour NEEDS it. Just as the CEO will personally drive to get pizzas for that hackathon because that’s practically free labor, everyone now wants everything connected.
For those of us who rode the API wave before integrating became hand-wavey, it sure feels like the world caught up.
I hope it will last, but I don’t know either.
I tried to find a rebuttal to this article from Slack, but couldn't. I'm on a flight with slow wifi though. If someone from Slack wants to chime in that'd be swell, too.
I've made the argument to CFOs multiple times over the years why we should continue to pay for Slack instead of just using Teams, but y'all are really making that harder and harder.
[0]: https://www.reuters.com/business/salesforce-blocks-ai-rivals...
As it is, I'm going to propose that we move more key conversations outside of Slack so that we can take advantage of feeding them into AI. It's a small jump from that to looking for alternatives.
In my experience Claude and Gemini can take over tool use, and all we need to do is tell them the goal. This is huge: before, we always had to specify the steps to achieve anything on a computer. Writing a fixed program to deal with a dynamic process is hard, while an LLM can adapt on the fly.
IFTTT was announced Dec. 14, 2010 and launched on Sept. 7, 2011.
Zapier was first pitched Sept. 30, 2011 and their public beta launched May 2012.
> It (the main benefit?) is the LLM itself, if it knows how to wield tools.
LLMs and their ability to use tools are not a benefit or feature that arose from MCP. There has been tool usage/support with various protocols and conventions way before MCP.
MCP doesn't have any novel aspects that are making it successful. It's relatively simple and easy to understand (for humans), and luck was on Anthropic's side. So people were able to quickly write many kinds of MCP servers and it exploded in popularity.
Interoperability and interconnecting tools, APIs, and models across providers are the main benefits of MCP, driven by its wide-scale adoption.
Perhaps, but we see current hypes like Cursor only using MCP one way: you can feed into Cursor (e.g., browser tools), but not out (e.g., conversation history, context, etc.).
I love Cursor, but this "not giving back" mentality, originally reflected in its closed-source forking of VS Code, leaves an unpleasant taste in the mouth, and I believe it will ultimately see Cursor lose developer credibility.
Lock-in still seems to be locked in.
Though the general API lockdown was started long before that, and like you, I’m skeptical that this new wave of open access will last if the promise doesn’t live up to the hype.
The APIs of old were about giving you programmatic access to publicly available information. Public tweets, public Reddit posts, that sort of thing. That's the kind of data AI companies want for training, and you aren't getting it through MCP.
In practice, the distinction is little more than the difference between different HTTP verbs, but I think there is a real difference in what people are intending to enable when creating an MCP server vs. standard APIs.
To this point, GUIs; going forward, AI agents. While the intention rhymes, the meaning of these systems diverges.
The APIs that used to be free and now aren't were just slightly ahead of the game; all these new MCP servers aren't going to be free either.
If I have an app whose backend needs to connect to, say, a CRM platform, I wonder whether, instead of writing integrations against Dynamics or Salesforce or HubSpot specifically, there's benefit in abstracting a CRM interface behind an MCP server so that switching CRM providers later (or adding additional CRMs) becomes easier.
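Something like this toy sketch is what I have in mind; the backend classes and the create_contact tool are made-up names, and the server shape just follows the fastmcp package's basic API rather than any vendor SDK:

```python
# Hypothetical sketch: one MCP-facing "CRM" surface with swappable backends.
from typing import Protocol

from fastmcp import FastMCP

class CrmBackend(Protocol):
    def create_contact(self, name: str, email: str) -> str: ...

class HubspotBackend:
    def create_contact(self, name: str, email: str) -> str:
        # a real implementation would call the vendor's REST API here
        return f"hubspot-contact-for-{email}"

class DynamicsBackend:
    def create_contact(self, name: str, email: str) -> str:
        return f"dynamics-contact-for-{email}"

# Swapping providers (or adding one) only touches this line or a config value;
# the tool surface the agent sees stays the same.
backend: CrmBackend = HubspotBackend()

mcp = FastMCP("crm")

@mcp.tool()
def create_contact(name: str, email: str) -> str:
    """Create a contact in whichever CRM is configured behind this server."""
    return backend.create_contact(name, email)

if __name__ == "__main__":
    mcp.run()
```

Whether that buys much over a plain internal interface is debatable, but it does keep the provider-specific code out of the agent-facing layer.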
(The mashup hype was incredible, btw. Some of the most ridiculous web contraptions ever.)
(I hereby claim the name "DJ MCP"...)
MCP is a fad; it’s not long-term tech. But I’m betting shoveling data at LLM agents isn’t. The benefits are too high for companies to allow vendors to lock the data away from them.
I'd bet that while "shoveling data at llm agents" might not be a fad, sometime fairly soon doing so for free while someone else's VC money picks up the planet destroying data center costs will stop being a thing. Imagine if every PHP or Ruby on Rails, or Python/Django site had started out and locked themselves into a free tier Oracle database, then one day Oracle's licensing lawyers started showing up to charge people for their WordPress blog.
One that won't be supported by any of the big names except to suck data into their walled gardens and lock it up. We all know the playbook.
> All of this has happened before, and all of this will happen again.
“It” here being the boom and inevitable bust of interop and open API access between products, vendors and so on. As a millennial, my flame of hope was lit during the API explosion of Web 2.0. If you’re older, your dreams were probably crushed already by something earlier. If you’re younger, and you’re genuinely excited about MCP for the potential explosion in interop, hit me up for a bulk discount on napkins.
I actually disagree with the OP in this sub-thread:
> "No, the accident is that the AI Agent wave made interoperability hype, and vendor lock-in old-fashioned."
I don't think that's happened at all. I think some interoperability is here to stay, but those are overwhelmingly the products where interoperability was already the norm. The enterprise SaaS products your company is paying for will ship MCP servers. But they probably already support various other plugin interfaces.
And they're not doing this because of hype or new-fangledness, but because their incentives are aligned with interoperability. If their SaaS plugs into [some other thing], it increases their sales. In fact, the lowering of integration effort is all upside for them.
Where this is going to run into a brick wall (and I'd argue: already has to some degree) is that closed platforms that aren't incentivized to be interoperable still won't be. I don't think we've really moved the needle on that yet. Uber Eats is not champing at the bit to build the MCP server that orders your dinner.
And there are a lot of really good reasons for this. In a previous job I worked on a popular voice assistant that integrated with numerous third-party services. There has always been vehement pushback to voice assistant integration (the ur-agent and to some degree still the holy grail) because it necessarily entails the service declaring near-total surrender about the user experience. An "Uber Eats MCP" is one that Uber has comparatively little control over the UX of, and has poor ability to constrain poor customer experiences. They are right to doubt this stuff.
I also take some minor issue with the blog: the problem with MCP as the "everything API" is that you can't really take the "AI" part out of it. MCP tools are not guaranteed to communicate in structured formats! Instead of getting an HTTP 401 you will get a natural language string like "You cannot access this content because the author hasn't shared it with you."
That's not useful without the presence of a NL-capable component in your system. It's not parseable!
Also importantly, MCP inputs and outputs are intentionally not versioned nor encouraged to be stable. Devs are encouraged to alter their input and output formats to make them more accessible to LLMs. So your MCP interface can and likely will change without notice. None of this makes for good API for systems that aren't self-adaptive to that sort of thing (i.e., LLMs).
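To make that natural-language-error point concrete, here's a rough illustration of the difference (the document URL, the get_document tool, and the session object are assumptions; the MCP side presumes an initialized ClientSession from the official mcp Python SDK):

```python
import urllib.error
import urllib.request

def fetch_via_rest(url: str) -> str:
    """Plain HTTP: failure modes are machine-checkable status codes."""
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.read().decode()
    except urllib.error.HTTPError as e:
        if e.code == 401:  # structured: a program can branch on this
            raise PermissionError("not authorized") from e
        raise

async def fetch_via_mcp(session, doc_id: str) -> str:
    """MCP tool call: the 'error' may come back as prose in a text block."""
    result = await session.call_tool("get_document", {"id": doc_id})
    # This might be the document, or it might be "You cannot access this
    # content because the author hasn't shared it with you." There is no
    # fine-grained status code for a non-LLM caller to branch on.
    return result.content[0].text
```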
In all seriousness though, I think HN has a larger-than-average amount of readers who've worked or studied around semantic web stuff.
> Want it to order coffee when you complete 10 tasks? MCP server.
With a trip through an LLM for each trivial request? A paid trip? With high overhead and costs?
Before, "add a public API to this comic reader/music player/home accounting software/CD archive manager/etc." would be a niche feature to benefit 1% of users. Now more people will expect to hook up their AI assistant of choice, so the feature can be prioritized.
The early MCP implementations will be for things that already have an API, which by itself is underwhelming.
You would think Apple would have a leg up here with AppleScript already being a sanctioned way to add scriptable actions across the whole of macOS, but as far as I can tell they don't hook it up to Siri or Apple Intelligence in any way.
I always imagined software could be written with a core that does the work and the UI would be interchangeable. I like that the current LLM hype is causing it to happen.
I'm just baffled no software vendor has already come up with a subscription to access the API via MCP.
I mean, obviously paid API access is nothing new, but "paid MCP access for our enterprise users" is surely in the pipeline everywhere, after which the openness will die down.
Optionality will kill adoption, and these things are absolutely things you HAVE to be able to play with to discover the value (because it’s a new and very weird kind of tool that doesn’t work like existing tools)
Heck, if AIs are at some point given enough autonomy to simply be given a task and a budget, there'll be efforts to try to trick AIs into thinking paying is the best way to get their work done! Ads (and scams) for AIs to fall for!
We're already there, just take a look at the people spending $500 a day on Claude Code.
Say I'm building an app and I want my users to be able to play Spotify songs. Yeah, I'll hit the Spotify API. But now, say I've launched my app, and I want my users to be able to play a song from sonofm when they hit play. Alright, now I have to open up the code, write some if statements, hard-code the sonofm API, ship a new version, and show some update messages.
MCP is literally just a way to make this extensible: instead of hardcoding the integration, it can be configured at runtime.
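A toy sketch of that contrast is below; nothing in it is a real SDK, it just shows a dispatch table being assembled at runtime from whatever "servers" a config lists, instead of from if/else branches baked into the app:

```python
# Self-contained toy; ToyServer stands in for an MCP server that can list and
# run its tools. In real life the server list would come from mcp.json-style
# config, not from source code.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    handler: Callable[[dict], str]

class ToyServer:
    def __init__(self, tools: list[Tool]):
        self._tools = {t.name: t for t in tools}

    def list_tools(self) -> list[Tool]:
        return list(self._tools.values())

    def call_tool(self, name: str, args: dict) -> str:
        return self._tools[name].handler(args)

# "Configured at runtime": adding a provider means adding a server entry,
# not editing and re-shipping the app.
servers = [
    ToyServer([Tool("play_spotify", lambda a: f"spotify playing {a['track']}")]),
    ToyServer([Tool("play_sonofm", lambda a: f"sonofm playing {a['track']}")]),
]

registry = {t.name: srv for srv in servers for t in srv.list_tools()}

def play(tool_name: str, track: str) -> str:
    return registry[tool_name].call_tool(tool_name, {"track": track})

print(play("play_sonofm", "lofi rain"))
```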
There is a confused history where Roy Fielding described REST, then people applied some of that to JSON HTTP APIs, designating those as REST APIs, then Roy Fielding said “no you have to do HATEOAS to achieve what I meant by REST”, then some people tried to make their REST APIs conform to HATEOAS, all the while that change was of no use to REST clients.
But now with AI it actually can make sense, because the AI is able to dynamically interpret the hypermedia content similar to a human.
Not sure I'm understanding your point that hypermedia means there is a human in the loop. Can you expand?
Further reading:
- https://htmx.org/essays/how-did-rest-come-to-mean-the-opposi...
They are structured in a way that a machine program could parse and use.
I don't believe it requires human-in-the-loop, although that is of course possible.
Your understanding is incorrect, the links above will explain it. HATEOAS (and REST, which is a superset of HATEOAS) requires a consumer to have agency to make any sense (see https://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans.h...)
MCP could profitably explore adding hypermedia controls to the system; it would be interesting to see whether agentic MCP APIs are able to self-organize.
Well, Swagger was there from the start, and there's nothing stopping an LLM from connecting to an openapi.json/swagger.yaml endpoint, perhaps mediated by a small XSLT-like filter that would make it more concise.
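For instance, a filter along these lines (the spec URL is a placeholder) boils a spec down to one skimmable line per operation before handing it to the model:

```python
# Fetch an OpenAPI document and condense it to "METHOD /path - summary" lines.
import json
import urllib.request

SPEC_URL = "https://api.example.com/openapi.json"  # placeholder endpoint

with urllib.request.urlopen(SPEC_URL) as resp:
    spec = json.load(resp)

lines = []
for path, methods in spec.get("paths", {}).items():
    for method, op in methods.items():
        if not isinstance(op, dict):  # skip path-level "parameters" entries
            continue
        summary = op.get("summary") or op.get("operationId", "")
        lines.append(f"{method.upper()} {path} - {summary}")

print("\n".join(lines))  # feed this condensed listing to the model as context
```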
Interestingly, ActiveX was quite the security nightmare for very similar reasons actually, and we had to deal with infamous "DLL Hell". So, history repeats itself.
(Even if only the former, it would of course be a huge step forward, as I could have the LLM generate schemata. Also, at least, everyone is standardizing on a base protocol now, and a way to pass command names, arguments, results, etc. That's already a huge step forward in contrast to arbitrary Rest+JSON or even HTTP APIs)
To speculate about this, perhaps the informality is the point. A full formal specification of something is somewhere between daunting and Sisyphean, and we're more likely to see supposedly formal documentation that nonetheless is incomplete or contains gaps to be filled with background knowledge or common sense.
A mandatory but informal specification in plain language might be just the trick, particularly since vibe-APIing encourages rapid iteration and experimentation.
https://modelcontextprotocol.io/specification/2025-03-26/ser...
You mean, like OpenAPI, gRPC, SOAP, and CORBA?
* https://github.com/grpc/grpc-java/blob/master/documentation/...
* https://grpc.io/docs/guides/reflection/
You can then use generic tools like grpc_cli or grpcurl to list available services and methods, and call them.
REST APIs have 5 or 6 ways of doing that, including "read it from our docs site", HATEOAS, and an OAS document served from an endpoint as part of the API.
MCP has a single way of listing endpoints.
> REST APIs have 5 or 6 ways of doing that
You think nobody's ever going to publish a slightly different standard to Anthropic's MCP that is also primarily intended for LLMs?
OpenAPI, OData, gRPC, GraphQL
I'm sure I'm missing a few...
To elaborate on this, I don't know much about MCP, but usually when people speak about it is in a buzzword-seeking kind of way, and the people that are interested in it make these kinds of conceptual snafus.
Second, and this applies not just to MCP but also to things like JSON, Rust, and MongoDB: there's this phenomenon where people learn the complex stuff before learning the basics. It's not the first time I've cited this video of Homer studying marketing, where he reads the books out of order: https://www.youtube.com/watch?v=2BT7_owW2sU

It makes sense that this mistake is so common. The amount of literature and resources is like an inverted pyramid: there are so few classical foundations and A LOT of new stuff, most of which will not stand the test of time. Typically you have universities to lead the way and establish a classical corpus and path, but this is such a young discipline that 70 years in we still haven't found much stability. Universities have gone from teaching C, to teaching Java, to teaching Python (at least in intro to CS); maybe they will teach Rust next. This buzzwording seems more in line with trying to predict the future, and there will be way more losers than winners in that realm. And the winners will have learned the classics in addition to the new technology; learning the new stuff without the classics is a recipe for disaster.
MCP seems like a more "in-between" step until the AI models get better. I imagine in 2 years, instead of using an MCP, we will point to the tool's documentation or OpenAPI, and the AI can ingest the whole context without the middle layer.
I don't have a high opinion of MCP, and the hype it's generating is ridiculous, but the problem it supposedly solves is real. If it can work as an excuse to get providers to expose an API for their functionality, like the article hopes, that's exciting for developers.
I don't think this is true.
My Claude Code can:
- open a browser, debug a UI, or navigate to any website
- write a script to interact with any type of accessible API
All without MCP.
Within a year I expect there to be legitimate "computer use" agents. I expect agent SDKs to take over from LLM APIs as the de facto abstractions for models, and MCP will have limited use, isolated to certain platforms - with the caveat that an MCP-equipped agent performs worse than a native computer-use agent.
I had similar skepticism initially, but I would recommend you dip a toe in the water before passing judgement.
The conversational/voice AI tech now dropping + the current LLMs + MCP/tools/functions to mix in vendor APIs and private data/services etc. really feels like a new frontier
It's not 100%, but it's close enough for a lot of use cases now, and it's going to change a lot of the ways we build apps going forward.
What blocked me initially was watching NDA'd demos a year or two back from a couple of big software vendors on how agents were going to transform the enterprise. What they were showing was a complete non-starter to anyone who had worked in a corporate environment, because of security, compliance, HR, silos, etc., so I dismissed it.
This MCP stuff solves that: it gives you (the enterprise) control in your own walled garden, whilst getting the gains from LLMs, voice, etc. The sum of the parts is massive.
It more likely wraps existing apps than integrates directly with them, the legacy systems becoming data or function providers (I know you've heard that before ... but so far this feels different when you work with it)
That is the odd part. I am far from being part of that group of people. I'm only 25; I joined the industry in 2018 as part of a training program in a large enterprise.
The odd part is, many of the promises are a bit déjà vu even for me. "Agents are going to transform the enterprise" and other promises do not seem that far off from the promises that were made during the low-code hype cycle.
Cynically, the more I look at AI projects as an outsider, the more I think AI could fail in enterprises for largely the same reason low code did. Organizations are made of people, and people are messy; as a result, the data is often equally messy.
Further, there are 2 kinds of use cases: 1) those where accuracy matters, and 2) those where it does not. And there are 2 kinds of users that consume the output of software: a) humans, and b) machines.
Where LLMs shine is in the 2a use cases, i.e., use cases where accuracy does not matter and humans are the end users. There are plenty of these use cases.
The problem is that LLMs are being applied to 1a and 1b use cases, where there is going to be a lot of frustration.
How would ingesting Ableton Live's documentation help Claude create tunes in it, for instance?
I made this MCP server so that you could chat with real-time data coming from the API - https://github.com/AshwinSundar/congress_gov_mcp. I’ve actually started using it more to find out, well, what the US Congress is actually up to!
I doubt the middleware will disappear; it's needed to accommodate the evolving architecture of LLMs.
MCP is already a useless layer between AIs and APIs; using it when you don't even have GenAI is simply idiotic.
The only redeeming quality of MCP is actually that it has pushed software vendors to expose APIs to users, but just use those directly...
Isn't that what we had about 20 years ago (web 2.0) until they locked it all up (the APIs and feeds) again? ref: this video posted 18 years ago: https://www.youtube.com/watch?v=6gmP4nk0EOE
(Rewatching it in 2025, the part about "teaching the Machine" has a different connotation now.)
Maybe it's that the protocol is more universal than before, and they're opening things up more due to the current trends (AI/LLM vs web 2.0 i.e. creating site mashups for users)? If it follows the same trend then after a while it will become enshittified as well.
At work I built an LLM-based system that invokes tools. We started before MCP existed and just used APIs (and continue to do so).
Its engineering value is nil, it only has marketing value (at best).
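For illustration, the pattern looks roughly like this (not our actual code; the weather endpoint and tool name are invented). Tools are ordinary functions that hit HTTP APIs, plus JSON-schema descriptions handed to whichever provider's function-calling interface you use; there's no MCP layer anywhere:

```python
import urllib.request

def get_weather(city: str) -> str:
    # hypothetical internal service; just a plain REST call
    with urllib.request.urlopen(f"https://weather.internal/api?city={city}") as r:
        return r.read().decode()

# Registry of callable tools, keyed by the name the model will use.
TOOLS = {"get_weather": get_weather}

# Schemas passed to the LLM provider's tool/function-calling API so the model
# knows what it can ask for and with which arguments.
TOOL_SPECS = [{
    "name": "get_weather",
    "description": "Look up current weather for a city.",
    "input_schema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

def dispatch(tool_name: str, arguments: dict) -> str:
    """Run whichever tool the model's response asked for."""
    return TOOLS[tool_name](**arguments)

# e.g. if the model returns a tool-use request named "get_weather" with
# arguments {"city": "Oslo"}:
#   dispatch("get_weather", {"city": "Oslo"})
```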
Anthropic wants to define another standard now btw https://www.anthropic.com/engineering/desktop-extensions
MCP is for technical users.
(Maybe read the link you sent, it has nothing to do with defining a new standard)
You can already do this as long as your client has access to an HTTP MCP server.
You can give the current generation of models an OpenAPI spec and they will know exactly what to do with it.
https://blog.runreveal.com/introducing-runreveal-remote-mcp-...
Once cryptocurrency was a thing this absolutely needed to exist to protect your accounts from being depleted by a hack. (like via monthly limits firewall)
Now we need universal MCP <-> API bridging to allow both programmatic and LLM access to the same thing. (Because apparently these AGI precursors aren't smart enough to be trained on generic API calling and need yet another standard: MCP?)
Maybe I'm not fully understanding the approach, but it seems like if you started relying on third-party MCP servers without the AI layer in the middle, you'd quickly run into backcompat issues. Since MCP servers assume they're being called by an AI, they have the right to make breaking changes to the tools, input schemas, and output formats without notice.
For example, the Kagi MCP server interacts with the Kagi API. Wouldn't you have a better experience just using that API directly then?
On another note, as the number of python interpreters running on your system increases with the number of MCP servers, does anyone think there will be "hosted" offerings that just provide a sort of "bridge" running all your MCP servers?
The additional API is /list-tools
And all the clients consume /list-tools first and then the rest of the APIs, depending on which tool they want to call.
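In client terms, the flow is just list-then-call. A minimal sketch with the official mcp Python SDK over stdio (the server.py path and the "add" tool are assumptions for illustration):

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

params = StdioServerParameters(command="python", args=["server.py"])

async def main() -> None:
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()   # the "/list-tools" step
            print([t.name for t in tools.tools])
            result = await session.call_tool("add", {"a": 1, "b": 2})
            print(result.content)                # then call the chosen tool

asyncio.run(main())
```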
On the other hand, in the absence of an existing API, you can implement your MCP server to just [do the thing] itself, and maybe that's where the author sees things trending.
The next step would be Microsoft attempting to make their registry the de facto choice for developers and extending with Windows-specific verbs.
Then, by controlling what's considered "secure", they can marginalize competitors.
I’m not familiar with the details but I would imagine that it’s more like:
”An MCP server which re-exposes an existing public/semi-public API should be easy to implement, with as few changes as possible to the original endpoint”
At least that’s the only way I can imagine getting traction.
Interoperability means user portability. And no tech bro firm wants user portability, they want lock in and monopoly.
Come to think of it - I don't know what the modern equivalent would be. AppleScript?
"IBM also once engaged in a technology transfer with Commodore, licensing Amiga technology for OS/2 2.0 and above, in exchange for the REXX scripting language. This means that OS/2 may have some code that was not written by IBM, which can therefore prevent the OS from being re-announced as open-sourced in the future. On the other hand, IBM donated Object REXX for Windows and OS/2 to the Open Object REXX project maintained by the REXX Language Association on SourceForge."
https://en.wikipedia.org/wiki/Rexx
https://en.wikipedia.org/wiki/OS/2#Petitions_for_open_source
It basically powers all inter-process communication in Windows.
Apps can expose endpoints that can be listed, and external processes can call these endpoints.
We had a non-technical team member write an agent to clean up a file share. There are hundreds of programming languages, libraries, and APIs that enabled that before MCP, but now people don't even have to think about it. Is it performant? No. Is it the "best" implementation? Absolutely not. Did it create enormous value, in a novel way that was not possible with the resources, time, and technology we had before? 100%. And that's the point.
This has to be BS (or you just think it's true) unless it was like 1000 files. In my entire career I've seen countless crazy file shares that are barely functional chaos. In nearly every single "cleanup" attempt I've tried to get literally ANYONE from the relevant department to help, with little success. That is just for ME to do the work FOR THEM. I just need context from them. I've on countless occasions had to go to senior management to force someone to simply sit with me for an hour to go over the schema they want to implement. SO I CAN DO IT FOR THEM. And they don't want to do it, and they literally seemed incapable of doing so when forced to. COUNTLESS times. This is how I know AI is being shilled HARD.
If this is true then I bet you anything in about 3-6 months you guys are going to be recovering this file system from backups. There is absolutely no way it was done correctly and no one has bothered to notice yet. I'll accept your downvote for now.
Cleaning up a file share is 50% politics, 20% updating procedures, 20% training, and 10% technical. I've seen companies go code red and practically grind to a halt over a months-long planned file share change. I've seen them rolled back after months of work. I've seen this fracture the file shares into insane duplication (or worse) because, despite the fact it was coordinated, senior managers did not so much as inform their departments (but attended meetings and signed off on things), and now it's too late to go back because some departments converted and some did not. I've seen helpdesk staff go home "sick" because they could not take the volume of calls and abuse from angry staff afterwards.
Yes I have trauma on this subject. I will walk out of a job before ever doing a file share reorg again.
You'll roll it out in phases? LOL
You'll run it in parallel? LOL
You'll do some <SUPER SMART> thing? LOL.
I'm too young to be posting old_man_yells_at_cloud.jpg comments...
Take out the LLM and you're not that far away from existing protocols and standards. It's not plugging your app into any old MCP and it just works (like the USB-C example).
But, it is a good point that the hype is getting a lot of apps and services to offer APIs in a universal protocol. That helps.
> But it worked, and now Rex's toast has HDMI output.
> Toaster control protocols? Rex says absolutely.
But you’re right, it does kind of miss the point.
A2A (agent-to-agent) is another accidental discovery for interoperability across agent boundaries.
I call bullshit, mainly because any natural language is ambiguous at best, and incomplete at worst.
I’m convinced that the only reason why MCP became a thing is because newcomers weren’t that familiar with OpenAPI and other existing standards, and because a protocol that is somehow tied to AI (even though it’s not, as this article shows) generates a lot of hype these days.
There’s absolutely nothing novel about MCP.
I can't be the only person that non-ironically has this.
A REST API makes sense to me…but this is apparently significantly different and more useful. What’s the best way to think about MCP compared to a traditional API? Where do I get started building one? Are there good examples to look at?
With a traditional API, people can build it any way they want, which means you (the client) need API docs.
With MCP, you literally restrict it to 2 things: get the list of tools, and call a tool (using the schema you got above). Thus the key insight is just: add one more endpoint that lists the APIs you have, so that robots can find them.
Example time:
- Build an MCP server (equivalent of "intro to Flask 101"): https://developers.cloudflare.com/agents/guides/remote-mcp-s...
- Now you can add it to Claude Desktop/Cursor and see what it does
- That's as far as I got lol
Then use FastMCP to write an MCP server in Python - https://github.com/jlowin/fastmcp
Finally, hook it up to an LLM client. It’s dead simple to do in Claude Code: create a .mcp.json file and define the server’s startup command.
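For a concrete starting point, a server with FastMCP can be as small as this sketch (the "add" tool is just a placeholder), and the .mcp.json entry would be something along the lines of {"mcpServers": {"demo": {"command": "python", "args": ["server.py"]}}}, though check the Claude Code docs for the exact shape:

```python
# server.py: a minimal FastMCP server with one placeholder tool.
from fastmcp import FastMCP

mcp = FastMCP("demo")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

if __name__ == "__main__":
    mcp.run()  # defaults to stdio, which is what Claude Desktop / Claude Code expect
```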
The use case in AI is sort of reversed such that the code runs on your computer
It is dependent on agents actually creating new demand for APIs and MCP being successful as a way to expose them.
> Anyone else feel like this article was written with ChatGPT
comments are actually written by ChatGPT.
What was that character in “South Park” that has a hand puppet? (White noise, flatline sound)
Or how about ‘oh it looks like your client is using SOAP 1.2 but the server is 1.1 and they are incompatible’. That was seriously a thing. Good luck talking to many different servers with different versions.
SOAP wasn’t just bad. It was essentially only usable between the same languages and versions. Which is an interesting issue for a layer whose entire purpose was interoperability.
MCP is fulfilling the promise of AI agents being able to do their own thing. None of this is unintended, unforeseen or particularly dependent on the existence of MCP. It is exciting, the fact that AI has this capability captures the dawn of a new era. But the important thing in the picture isn't MCP - it is the power of the models themselves.
It's almost poetic how we're now incentivizing interoperability simply because our digital buddies have to eat (or rather, drink) to stay awake. Who would've thought that the quest for connectivity would be driven by the humble Watt hour?
I guess when it comes down to it, even AI needs a good power-up - and hopefully, this time around, the plugins will stick. But hey, I'll believe it when my assistant doesn't crash while trying to order takeout.
Then you aren't exploring a novel concept, and you are better served learning about historical ways this challenge has been attempted rather than thinking it's the first time.
Unix pipes? APIs? POSIX? JSON? The list is endless; this is one of those requirements you can identify as just being a basic one of computers. Another example is anything that is about storing and remembering information. If it's that foundational, there will have been tools and protocols to deal with it since the '70s.
For the love of god, before diving into the trendy new thing, learn about the boring old things.