You could also turn around and say that it's a good context boundary for the LLM, which is true, but then you're back at the same problem microservices have always had: they push the integration work onto another team so that developers can make it Not Their Problem. Which is, honestly, just your point restated in different words.
I think your statement can also be used against event-driven architecture - having a massive event bus that controls all the levers of your distributed system always sounds great in theory, but in practice you end up with almost exactly the problem you just described, because the tooling for offering those integration guarantees is nowhere near as robust as a centralized database.
Even if the same dev is driving the work, it's like having a junior engineer do a cross-service staggered release and letting them skip the well-defined existing API surfaces. The entire point of microservices is that you deliberately add friction to that kind of change so things can be released and developed separately. IMO it has an easy solution too: just direct one agent per repo/service, the way you would if you really did need to make that kind of change anyway and wanted to do it through junior developers.
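Roughly what that looks like as a sketch - the repo names and prompts are made up, and the headless `claude -p` invocation is an assumption; substitute whatever agent CLI you actually use:

```python
# One agent per repo/service: each agent only sees its own checkout, so
# cross-service work has to go through the published API surfaces.
import subprocess
from pathlib import Path

# Hypothetical repos and tasks for a change that spans two services.
TASKS = {
    "billing-service": "Add a `currency` field to the invoice API; bump the minor version.",
    "checkout-service": "Read the new `currency` field from billing's invoice API.",
}

for repo, task in TASKS.items():
    repo_dir = Path.home() / "src" / repo
    # Assumed headless invocation; the agent's working directory is the
    # single repo it is allowed to touch.
    subprocess.run(["claude", "-p", task], cwd=repo_dir, check=True)
```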
> they push the integration work onto another team so that developers can make it Not Their Problem
I mean yes and no, this is oftentimes completely intended from the perspective of the people making the decision to do microservices. It's a way to constrain how people develop and coordinate with each other, precisely because you don't want all 50 of your developers running amok in the entire codebase (especially when they don't know how or why that code was structured the way it was originally, and they aren't very skilled or conscientious about integrating things maintainably or testing existing behavior).
> so that developers can make it Not Their Problem
IMO this is partially orthogonal to the problem. Microservices don't necessarily mean you can't modify another team's code - jealously guarding a codebase like that is a generally pretty counterproductive mindset for engineering teams. It just means you might need to send the other team a PR or coordinate with them first rather than making the change unilaterally. Or maybe you just want to release the things separately; lately I find myself wanting that more and more, because past a certain size agents just turn repos into balls of mud or start re-implementing things.
Once Claude Code came out, something clicked for me about how agent coordination will actually end up working in practice. Unless you want to spend a ton of time trying to prompt agents into understanding separation of concerns (Claude Code specifically seems to often ignore these instructions or have conflicting default instructions), if you want to scale out agent-driven development you need to enforce separation of concerns at the repo level.
It's basically the same problem it was 5-10 years ago: if you have a bunch of logic that interacts across "team"/knowledge/responsibility/feature boundaries, then interacting with your dependencies over an API, developing in separate repos, and building and rolling out the logic separately all help enforce separation of concerns and integration around well-specified interfaces.
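Concretely, "integrating over an API" means the boundary looks something like this - the endpoint and payload shape are hypothetical, just to show the contrast with importing another service's internals:

```python
# The checkout code never does `from billing.models import Invoice`; the only
# coupling to the billing service is its documented HTTP contract.
import json
from urllib.request import urlopen

def fetch_invoice(invoice_id: str) -> dict:
    # Hypothetical internal endpoint; the point is that changing billing's
    # internals can't break this caller as long as the contract holds.
    with urlopen(f"http://billing.internal/api/v1/invoices/{invoice_id}") as resp:
        return json.load(resp)
```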
In an ideal world, Claude Code would not turn every repo into a ball of mud, at least if you asked it nicely and gave it clear guidelines to follow. That was always true with monoliths and trying to coordinate/train less experienced developers not to do the same thing, and it turns out we didn't live in an ideal world back then either, so we used microservices to prevent it more structurally! History sure does rhyme.
Most mid-scale problems don't demand a micro-services solution, with data ownership, delineation of service responsibilities, etc. Monoliths with single-truth databases work just fine.
Micro-services are an organizational convenience, not a technology response to complexity. It's easier to manage groups of people than it is to manage complex technology. That's fine if you need it. Normally, you don't.
If it works for you, sure, go ahead. If it doesn't, don't chase a topical orthodoxy.
2. Having at least some ability to run heterogeneous workloads in your production environment (i.e. being able to flip a switch and do microservices if you decide to) is very useful if you need to do more complicated migrations, integrate OSS/vendor software, or whip up a demo on short notice. Oftentimes you may not want to "do microservices" ideologically or as a focal point for development, but you can easily end up in a situation where you want "a microservice", and there can be an unnecessarily large number of obstacles to that if you've built all your tooling and ops around the assumption of "never microservices".
3. If you're working with open source software products and infra a lot, it's just way easier to e.g. launch a Stalwart container to do email hosting than to figure out how to implement email hosting in your existing db and monolith (see the sketch after this list). Also see above: if you find a good OSS project that helps you do something much faster or more effectively, it's good for it to be easy to put it in prod.
4. Claude Code and less experienced or skilled developers don't understand separation of concerns. Now that agentic development is picking up, even orgs that didn't need the organizational convenience before may find themselves needing it now. Personally, this has been a major consideration in how I structure new projects since I became aware of the problem.
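To make point 3 concrete, "launching the container" is roughly the whole deployment. A minimal sketch using the docker Python SDK - the image name, ports, and volume path are assumptions, not a verified Stalwart setup:

```python
# Run a self-contained mail server next to the monolith instead of building
# email hosting into it. Requires Docker and `pip install docker`.
import docker

client = docker.from_env()
client.containers.run(
    "stalwartlabs/mail-server",  # assumed image name; check the project's docs
    name="mail",
    detach=True,
    ports={"25/tcp": 25, "143/tcp": 143, "443/tcp": 443},
    volumes={"/srv/stalwart": {"bind": "/opt/stalwart-mail", "mode": "rw"}},
    restart_policy={"Name": "unless-stopped"},
)
```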
* It makes it easier to use multiple different languages.
* You can easily scale different parts of your application independently.
* Organisational convenience.
Usually though you don't need any of that.
Just to add that I think some people assume this is something they need, even when there's no basis for it.
Do you actually need 1 instance that handles Foo-type requests, and 99 instances that handle Bar-type requests, or would you be fine with 100 instances that are capable of handling either Foo or Bar as necessary?
The distinction only really matters if there is some significant fixed overhead associated with being available to serve Foo requests, such that running an extra 99 of those processes has a cost, regardless of how much traffic they serve. For instance, if every Foo server needs GBs of static data to be held in RAM, or needs to be part of a Paxos group, or something like that.
But if your services are stateless, then you probably don't benefit at all from scaling them independently.
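A sketch of the interchangeable-instances option (the request types and routes are made up): one stateless process handles both Foo and Bar, and you just run N identical copies behind a load balancer.

```python
# A single stateless server that handles both request types; capacity is
# shared across Foo and Bar instead of being pre-partitioned per type.
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle_foo() -> bytes:
    return b"foo handled\n"

def handle_bar() -> bytes:
    return b"bar handled\n"

class Handler(BaseHTTPRequestHandler):
    ROUTES = {"/foo": handle_foo, "/bar": handle_bar}

    def do_GET(self):
        handler = self.ROUTES.get(self.path)
        if handler is None:
            self.send_error(404)
            return
        body = handler()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Run 100 identical copies of this; any one of them can take either
    # request type, so there's nothing left to scale independently.
    HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()
```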
Quite easy to run into hardware, OS, or especially language runtime limitations (looking at you, Ruby and Python) when pushing even moderately high traffic, even for totally stateless applications.
Your first point is valid. There are few ways to get it, and it's not clear if services are harder or easier than the alternatives.
There are benefits to keeping some things small and isolated. Teams that have problems with micro services are doing it wrong.
Edit: Genuinely curious about the downvotes here. The concept directly maps to all the reasons the article author cited.
At the beginning, it's nice to know that team A is responsible for service B, so that if there are any bugs/feature requests, you know who to go to.
This neat story falls apart if team A gets dissolved or merged with another team. The new group owning service B doesn't feel the same level of ownership. When maintenance work comes in (security vulnerabilities, stack modernization, migrations), the team is forced to internalize it rather than share it with other teams.
Operational maintenance also becomes a lot harder - deploying many services correctly is much more difficult locally, in pre-prod, and in production. Any benefits you get from writing a single HTTP server quickly are erased in the time spent on integration work with the rest of the company.
Here is a blog post that makes a more expansive argument against microservices: https://www.docker.com/blog/do-you-really-need-microservices...
Yet another argument that applies better or equally well to shared libraries.
I've made arguments for creating services at work. But it seems that every time somebody tries to make a case for them on the web, it's not actually a reason to use services.
There may be as many as 500 microservices, and their number keeps growing rapidly. The situation may no longer be under control; sometimes even the responsibility for maintaining them is unclear or "shared". It is easier to implement a new microservice than to track down who could implement something in an existing one.
I have encountered this problem several times, so I started a side project to bring such situations under control. It is still alpha, but the first part -- scoping the problem -- is already pretty useful, letting you select, visualize, and tag microservices.
If anybody is interested, the code is here:
The cost of change is radically increased using micro services.
With microservices you scatter the business complexity across multiple services and lift a considerable amount of complexity out of the easily testable code base and into the infrastructure.
IMHO doing a micro service architecture for this reason is horrible.
We have had instances of microservice architecture where doing one change required changes in 4 different microservices, defeating the whole point. Obviously this is bad.
A “well thought out” architecture that holds up over years is a pipe dream. There will always be changes that require a rethinking of the whole system.
Then the problem is you can't refactor it even if you wanted to, because other teams have taken dependencies on your existing APIs etc. Cleaning up becomes a risky quarter-long multi-team project instead of a straightforward sprint-long ticket.
I think AI is going to reverse the microservice trend too. The main thing microservices improve is allowing teams to work more independently. Deployments, and especially rollbacks if there's a bug, can be quick for a microservice but take lots of coordination for monoliths. With AI (once/if it gets better), I imagine project work will be a lot more serial, since agents work so much faster, and they'll be able to deploy one project at a time. A lot less chance of overlapping or incompatible changes that block monolith rollouts for weeks until they're resolved. A lot less feature-flagging of every single little change, since you can just roll back the deployment if needed. Plus, a single codebase will be a lot easier for a single AI system to understand and work with e2e.
In what way do microservices handle this better? When you have a feature request that cuts across service boundaries you have to coordinate multiple teams to change and deploy at the same time.
In my team, we once had 5 deployments of our service in a day, and we did not have to coordinate with anyone outside the team. This is an amazing benefit. Not many realise the cost we pay due to coordination. I can't even imagine how this would work in a monolith. Maybe we would meticulously write code in our dev environment and kinda pray that it works properly in production when our code is released, say, once a day.
Real life is more messy and it is great that I had the option to deploy 5 times to production. That fast feedback loop is much appreciated.
From personal experience, the problem is complexity. Which ends up costing money. At a certain scale, splitting off separate services may or may not make sense. But always building anything and everything as a set of black boxes that only communicate over network APIs, each potentially with their own database, is one of those ideas that sounds like fun until you've had a taste of the problems involved, especially if you have strong ACID requirements or want to debug pieces in isolation.