What we do (physics simulation software) doesn’t need all the complexity (in my opinion as a long-time software developer & tester) and software engineering knowledge that splitting things into microservices requires.
Only have as much complexity as you absolutely need; the old saying “Keep it simple, stupid” still has a lot of truth to it.
But the path is set, so I’ll just do my best as an individual contributor for the company and the clients who I work with.
And by that, I mean that I have at times seen it, and perhaps even personally used it, as a cudgel: "This thing has a specific contract, it is implicitly separate, and it forces people to remember that if their change needs to touch other parts, they have to communicate it." In the real world you sometimes need to partition software enough that engineers don't stray too far outside the boundaries one way or another (i.e. changes inadvertently breaking something else because they weren't focused enough).
There are of course microservices for things like the news feed etc., but IIRC all of fb.com and the mobile app GraphQL is served from the monolith by default.
If not, do the monolith thing as long as you can.
But if you're processing jobs that need hand off to a GPU, just carve out a service for it. Stop lamenting over microservices.
If you've got 100+ engineers and different teams own different things, try microservices. Otherwise, maybe keep doing the monolith.
If your microservice is as thin as leftpad.js and hosts only one RPC call, maybe don't do that. But if you need to carve out a thumbnailing service or authC/authZ service, that's a good boundary.
There is no "one size fits all" prescription here.
The other key difference between microservices and other architectures is that each microservice should be able to perform its primary function (at least temporarily) without hard dependencies, which basically means keeping a copy of the data it needs. Service Oriented Architecture doesn't have this as one of its properties, which is why I think of it as a mildly distributed monolith. "Distributed monolith" is the worst thing you could call a set of microservices--all the pain without the gains.
Google played a role in popularizing the microservice approach.
When I was at Google, a microservice would often be worked on with teams of 10-30 people and take a few years to implement. A small team of 4-5 people could get a service started, but it would often take additional headcount to productionize the service and go to market.
I have a feeling people overestimate how small microservices are and underestimate how big monorepos are. About nine times out of ten, when I see something called a monorepo, it's for a single project as opposed to a repo that spans multiple projects. I think the same is true of microservices. Many things that Amazon or Google considers microservices might be considered monoliths by the outside world.
The BEAM ecosystem (Erlang, Elixir, Gleam, etc) can do distributed microservices within a monolith.
A single monolith can be deployed in different ways to handle different scalability requirements. For example, a distinct set of pods responding to endpoints for reports, another set for just websocket connections, and the remaining ones for the rest of the endpoints. Those can be independently scaled but released on the same cadence.
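A minimal sketch (illustration only, not from the comment above) of that idea in Python with Flask: one artifact mounts different endpoint groups based on an assumed SERVICE_ROLE environment variable, so the "reports" pods and the "default" pods run the same code but scale independently.

    # Hypothetical sketch: one codebase, one image; the role decides which routes a pod serves.
    import os
    from flask import Flask

    app = Flask(__name__)

    def register_reports(app):
        @app.route("/reports/daily")
        def daily_report():
            return {"report": "..."}  # placeholder payload

    def register_default(app):
        @app.route("/api/items")
        def items():
            return {"items": []}  # placeholder payload

    # SERVICE_ROLE is an assumed deployment knob; each pod set gets a different value.
    REGISTRARS = {
        "reports": [register_reports],
        "default": [register_default],
        "all": [register_reports, register_default],
    }

    for register in REGISTRARS[os.environ.get("SERVICE_ROLE", "all")]:
        register(app)

    if __name__ == "__main__":
        app.run()

Each pod set (and its autoscaler) points at the same image with a different SERVICE_ROLE value, while every role still releases on the same cadence.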
There was a long-form article I once read that reasoned through this. Given M code sources, there are N deployables. It is the delivery system’s job to transform M -> N. M is based on how the engineering team(s) work on code, whether that is a monorepo, multiple repos, shared libraries, etc. N is what makes sense operationally. By making it the delivery system’s job to transform M -> N, you can decouple M and N. I don’t remember the title of that article anymore. (Maybe someone on the internet remembers.)
This ain't new. Any language supporting loading modules can give you the organization benefit of microservices (if you consider it a benefit that is - very few orgs actually benefit from the separation) while operating like a monolith. Java could do it 20+ years ago, just upload your .WAR files to an application server.
Erlang could do it almost 40 years ago.
It can be used to upgrade applications at runtime without stopping the service. That works well in Erlang, it’s designed from the ground up for it. I know of a few places that used that feature.
NGL, it was clever enough that every few years I think about trying to redo the concept in another language...
Most companies should be perfectly fine with a service oriented architecture. When you need microservices, you have made it. That's a sign of a very high level of activity from your users, it's a sign that your product has been successful.
Don't celebrate before you have cause to do so. Keep it simple, stupid.
I think the main reason microservices were called “microservices” and not “service-oriented architecture” is that they were an attempt to revive the original SOA concept at a time when “service-oriented architecture” as a name was still tainted by its perceived association with XML and the WS-* series of standards (and, ironically, often with systems that supported some subset of those standards for interaction despite not really applying the concepts of the architectural style).
I'm curious, and the specific list of problems and pain points (if--big if!--everyone there agrees on what they are) can help more clearly guide the decisions about what the next architecture should look like--SoA, monolithic, and so on.
I've seen a few regrettable things at one job where they'd ended up shipping a microservice-y design without much thought about service interfaces. One small example: team A owns a service that runs as an overnight job making customer-specific recommendations that get written to a database, and team B owns a service that surfaces those recommendations as a customer-facing app feature and reads directly from that database. It probably ended up that way because team A had the data scientists, team B had the app backend engineers for the feature, they had to ship something, and no architect or senior engineer put their foot down about interfaces.
That'd be a pretty reasonable design if team A and team B were the same team, so they could regard the database as internal with no way to access it except through a service with a well-defined interface. Failing that, it's hard to evolve the schema of the data model in the DB without a well-defined interface you can use to decouple implementation changes from consumers, especially when the consuming team B has its own list of quarterly priorities.
Microservices & their alternatives aren't really properties of the technical system in isolation; they also depend on the org chart & which team owns which parts of the overall system.
SOA: pretty good, microservices: probably not a great idea, microservices without SOA: avoid.
For anyone unfamiliar with SOA, there's a great sub-rant in Steve Yegge's 2011 google platforms rant [1][2] focusing on Amazon's switch to service oriented architecture.
[1] https://courses.cs.washington.edu/courses/cse452/23wi/papers... [2] corresponding HN thread from 2011 https://news.ycombinator.com/item?id=3101876
Consider this: every API call (or function call) in your application has different scaling requirements. Every LOC in your application has different scaling requirements. What difference does it make whether you scale it all "together" as a monolith or separately? One step further, I'd argue it's better to scale everything together because the total breathing room available to any one function experiencing unusual load is higher than if you deployed everything separately. Not to mention intra- and inter-process comm being much cheaper than network calls.
The "correct" reasons for going microservices are exclusively human -- walling off too much complexity for one person or one team to grapple with. Some hypothetical big brain alien species would draw the line between microservices at completely different levels of complexity.
(To clarify, I’m not disagreeing with you!)
Operationally, it is very nice to be able to update one discrete function on its own in a patch cycle. You can try to persuade yourself you will pull it off with a modular monolith, but the physical isolation of separate services provides guarantees that no amount of testing / review / good intentions can.
However, it's equally an argument for SOA as it is for microservices.
I don't want microservices; I want an executable. Memory is shared directly, and the IDE and compiler know about the whole system by virtue of it being integrated.
But unless you have some way of enforcing that access between different components happens through well-defined interfaces, the codebase may end up very tightly coupled and expensive or impractical to evolve and change, if shared memory makes it easy for folks to add direct dependencies between data structures of different components that shouldn't be coupled.
You are describing the "microservice architecture" that I currently loathe at my day job. Fans of microservices would accurately say "well that's not proper microservices; that's a distributed monolith" but my point is that choosing microservices does not enforce any kind of architectural quality at all. It just means that all of your mistakes are now eternally enshrined thanks to Hyrum's Law, rather than being private/unpublished functions that are easy to refactor using "Find All References" and unit tests.
Every compiled language has the concept of "interfaces", and can load even compiled modules/assemblies if you insist on them being built separately.
The compiler will enforce interface compliance much better than hitting untyped JSON endpoints over a network.
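A rough sketch of the same idea inside a Python monolith (the names and the Protocol are illustrative assumptions; a static type checker like mypy stands in for the compiler): components only talk through a declared interface, and out-of-contract access gets flagged before it ships.

    # Illustrative only: a "thumbnail store" interface inside a monolith.
    from typing import Protocol

    class ThumbnailStore(Protocol):
        def save(self, image_id: str, data: bytes) -> None: ...
        def load(self, image_id: str) -> bytes: ...

    class InMemoryThumbnailStore:
        # One concrete implementation; callers never see its internals.
        def __init__(self) -> None:
            self._blobs: dict[str, bytes] = {}

        def save(self, image_id: str, data: bytes) -> None:
            self._blobs[image_id] = data

        def load(self, image_id: str) -> bytes:
            return self._blobs[image_id]

    def make_thumbnail(store: ThumbnailStore, image_id: str) -> bytes:
        # Depends only on the interface; a type checker rejects callers poking at _blobs.
        return store.load(image_id)[:64]  # stand-in for actual resizing

It's not as strong as a compiler enforcing a binary interface, but it gives the same decoupling benefit without a network hop.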
I have never done this yet.
But I love the idea of it.
The only time I can think where a JVM might be faster is if you have a multi-tenant setup. In that case, the JVM can be more effective with the GC vs having multiple JVMs running.
- the uploader API
- the uploader UI
- the frame API
- the frame UI
UIs are SSG'd with solid-js and solid-start then served with gin.
It's really fun.
I think what changed things is that FaaS came along and people started describing nanoservices as microservices, which led to some really dumb decisions.
I've worked on a true monolith and it wasn't fun. Having your change rolled back because another team made a mistake and it was hard to isolate the two changes was really rough.
And in a few days, we're going to get a long thread about how software is slow and broken and terrible, and nobody will connect the dots. Software sucks because the way we build it sucks. I've had the distinct privilege of helping another team support their Kubernetes monstrosity, which shat the bed around double-digit requests per second, and it was a comedy of errors. What should've otherwise just been some Rails or Django application with HTML templating and a database was three or four different Kubernetes pods, using gRPC to poorly and unnecessarily communicate with each other. It went down all. The. Time. And it was a direct result of the unnecessary complexity of Kubernetes and the associated pageantry.
I would also like to remind everyone that Kubernetes isn't doing anything your operating system can't do, and often does better. Networking? Your OS does that. Scheduling? Your OS does that. Resource allocation and sandboxing? If your OS is decent, it can absolutely do that. Access control? Yup.
I can confidently say that 95% of the time, you don't need Kubernetes. For the other 5%, really look deep into your heart and ask yourself if you actually have the engineering problems that distributed systems solve (and if you're okay with the other problems distributed systems cause). I've had five or six jobs now that shoehorned Kubernetes into things, and I can confidently say that the juice ain't worth the squeeze.
It would be a blessing if people actually did that, because then they'd avoid useless distributed systems.
> using gRPC to poorly and unnecessarily communicate
At least you've had the blessing of it being gRPC and not having to manually write JSON de/serializers by hand.
> Kubernetes isn't doing anything your operating system can't do
Kubernetes is good if you need to orchestrate across multiple machines. This of course requires an actual need for multiple machines. If you're doing so with underpowered cloud VMs (of which you waste a third of the RAM on K8s itself), just get a single bigger VM and skip K8s.
Which one?
So no I don't want microservices (again), but sometimes it's still the right thing.
Three-tier architecture proves, time and time again, to be robust for most workloads.
Put it into a monorepo so the other teams have visibility into what is going on and can create PRs if needed.
But it is a bit sad that the poster apparently never bought a pizza just for themselves.
The optimum is probably closer to 1 than to 2.
"Traditional" three-tier, where you have a web server talking to an application server talking to a database server, seems like overkill; I'd get rid of the separate application tier.
If your tiers are browser, web API server, database: then three tiers still makes sense.
1. Full-on microservices, i.e. one independent lambda per request type, is a good idea pretty much never. It's a meme that caught on because a few engineers at Netflix did it as a joke that nobody else was in on.
2. Full-on monolith, i.e. every developer contributes to the same application code that gets deployed, does work, but you do eventually reach a breaking point as either the code ages and/or the team scales. The difficulty of upgrading core libraries like your ORM, monitoring/alerting, pandas/numpy, etc, or infrastructure like your Java or Python runtime, grows superlinearly with the amount of code, and everything being in one deployed artifact makes partial upgrades either extremely tricky or impossible depending on the language. On the operational and managerial side, deployments and ownership (i.e. "bug happened, who's responsible for fixing?") eventually get way too complex as your organization scales. These are solvable problems though, so it's the best approach if you have a less experienced team.
3. If you're implementing any sort of SoA without having done it before -- you will fuck it up. Maybe I'm just speaking as a cynical veteran now, but IMO lots of orgs have keen but relatively junior staff leading the charge for services and kubernetes and whatnot (for mostly selfish resume-driven development purposes, but that's a separate topic) and end up making critical mistakes. Usually some combination of: multiple services using a shared database; not thinking about API versioning; not properly separating the domains; using shared libraries that end up requiring synchronized upgrades.
There's a lot of service-oriented footguns that are much harder to unwind than mistakes made in a monolithic app, but it's really hard to beat SoA done well with respect to maintainability and operations, in my opinion.
This makes it clear when you might want microservices: you're going through a period of hypergrowth and deployment is a bigger bottleneck than code. This made sense for DoorDash during COVID, but that's a very unusual circumstance.
The main time I can see this making sense is when the data access patterns are so different in scale and frequency that they're optimizing for different things and cause resource contention. But even then, my question would be: do you really need a separate instance of the same kind of DB inside the service, or do you need another global replica, or an instance of a different kind of DB (for example ClickHouse if you've been running Postgres and now need efficient OLAP on large columnar data)?
Once you get to this scale, I can see the idea of cell-based architecture [1] making sense -- but even at this point, you're really looking at a multi-dimensionally sharded global persistence store where each cell is functionally isolated for a single slice of routing space. This makes me question the value of microservices with state bound to the service writ large and I can't really think of a good use case for it.
[1] https://docs.aws.amazon.com/wellarchitected/latest/reducing-...
The issue with this is schema evolution. As a very simple example, let's say you have a User table and many microservices accessing it. Now you want to add an "IsDeleted" column to implement soft deletion; how do you do that? First you add the actual column to the database, then you update every single service that queries the table to ensure it filters out IsDeleted=True, deploy all of those services, and only then can you actually start using the column. If you must update services in lockstep like this, you've built a distributed monolith, which is all of the complexity of microservices with none of the benefits.
A proper service-oriented way to deal with this is have a single service with control of the User table and expose a `GetUsers` API. This way, only one database and its associated service needs to be updated to support IsDeleted. Because of API stability guarantees--another important guarantee of good SoA--other services will continue to only get non-deleted users when using this API, without any updates on their end.
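As a sketch of what that might look like (the table and function names are illustrative, not the commenter's), the owning service keeps soft deletion entirely behind its API:

    # Hypothetical "user service" that owns the users table; IsDeleted never leaks out.
    import sqlite3

    def get_users(conn: sqlite3.Connection) -> list[dict]:
        # API contract: only active (non-deleted) users are returned.
        rows = conn.execute(
            "SELECT id, name FROM users WHERE is_deleted = 0"
        ).fetchall()
        return [{"id": r[0], "name": r[1]} for r in rows]

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (id INTEGER, name TEXT, is_deleted INTEGER)")
        conn.executemany(
            "INSERT INTO users VALUES (?, ?, ?)",
            [(1, "alice", 0), (2, "bob", 1)],
        )
        print(get_users(conn))  # only alice; bob is soft-deleted and invisible to callers

Consumers keep calling GetUsers unchanged while the owning service adds the column, backfills it, and starts filtering on it.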
> You lose data integrity, joining ability, one coherent state of the world, etc.
You do lose this! And it's one of the tradeoffs, and why understanding your domain is so important for doing SoA well. For subsets of the domain where data integrity is important, it should all be in one database, and controlled by one service. For most domains, though, a lot of features don't have strict integrity requirements. As a concrete though slightly simplified example, I work with IoT time-series data, and one feature of our platform is using some ML algorithms to predict future values based on historical trends. The prediction calculation and storage of its results is done in a separate service, with the results being linked back via a "foreign key" to the device ID in the primary database. Now, if that device is deleted from the primary database, what happens? You have a bunch of orphaned rows in the prediction service's database. But how big of a deal is this actually? We never "walk back" from any individual prediction record to the device via the ID in the row; queries are always some variant of "give me the predictions for device ID 123". So the only real consequence is a bit of database bloat, which can be resolved via regularly scheduled orphan checking processes if it's a concern.
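A hedged sketch of what such an orphan check could look like (the table name and the way live device IDs are fetched are assumptions, not details from the comment):

    # Hypothetical scheduled job running against the prediction service's own database.
    import sqlite3
    from typing import Iterable

    def purge_orphaned_predictions(conn: sqlite3.Connection,
                                   live_device_ids: Iterable[str]) -> int:
        # Delete prediction rows whose device no longer exists in the primary system.
        live = set(live_device_ids)
        orphans = [
            row[0]
            for row in conn.execute("SELECT DISTINCT device_id FROM predictions")
            if row[0] not in live
        ]
        for device_id in orphans:
            conn.execute("DELETE FROM predictions WHERE device_id = ?", (device_id,))
        conn.commit()
        return len(orphans)

The live device IDs would come from the primary service's API on whatever schedule makes the bloat tolerable.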
It's definitely a mindshift if you're used to a "everything in one RDBMS linked by foreign keys" strategy, but I've seen this successfully deployed at many companies (AWS, among others).
For example, I work in a small company with a data processing pipeline that has lots of human in the loop steps. A monolith would work, but a major consideration with it being a small company is cloud cost, and a monolith would mean slow load times in serverless or persistent node costs regardless of traffic. A lot of our processing steps are automated and ephemeral, and across all our customers, the data tends to look like a wavelet passing through the system with an average center of mass mostly orbiting around a given step. A service oriented architecture let us:
- Separate steps into smaller “apps” that run on demand with serverless workers.
- avoid the scaling issues of killing our database with too many concurrent connections by having a single “data service”—essentially organizing all the wires neatly.
- ensure that data access (read/write on information extracted from our core business objects) happens in a unified manner, so that we don’t end up with weird, fucky API versioning.
- for the human in the loop steps, data stops in the job queue at a CRUD app as a notification, where data analysts manually intervene.
A monolith would have been an impedance mismatch for the inherent “assembly line” model here, regardless of dogma and the fact that yes, a monolith could conceivably handle a system like this without as much network traffic.
You could argue that the data service is a microservice. It’s a single service that serves a single use case and guards its database access behind an API. I would reply to any consternation or foreboding due to its presence in a small company by saying “guess what, it works incredibly well for us. Architecture is architecture: the pros and cons will out, just read them and build what works accordingly.”
However I go the other way than you: I have found AI needs as much context as possible and that means it understands monoliths (or fatter architectures) better. At least, the agentic style approach where it has access to the whole git tree / source repository. I find things break down a lot when changes are needed across source repositories.
I want just services.
The current hell is x years of undisciplined (in terms of perf and cost) new ORM code being deployed (SQLAlchemy). We do an insane number of read queries per second relative to our usage.
I honestly think the newish devs we have hired don't understand SQL at all. They seem to think of it as some arcane low level thing people used in the 80s/90s.
I had an architect bemoan the suggestion that we use a microservice, until he had to begrudgingly back down when he was told that the function we were talking about (running a CLIP model) would mean attaching a GPU to every task instance.
I was stunned... He comes up with this stuff all the time. Thanks Matt.
And now, SaaS is finally making the jump to the last position - hybrid/mini.
What I want is a lightweight infrastructure for macro-services. I want something to handle the user and machine-to-machine authentication (and maybe authorization).
I don't WANT the usual K8s virtual network for that, just an easy-to-use module inside the service itself.
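A minimal stdlib-only sketch of that kind of in-service auth module (the shared-secret bearer scheme and the names are assumptions, not the commenter's):

    # Illustrative in-process auth check: shared-secret bearer token, no sidecars or mesh.
    import hmac
    import os
    from typing import Optional

    SERVICE_TOKEN = os.environ.get("SERVICE_TOKEN", "")

    def is_authorized(authorization_header: Optional[str]) -> bool:
        # Expects "Authorization: Bearer <token>" from the calling service.
        if not authorization_header or not authorization_header.startswith("Bearer "):
            return False
        presented = authorization_header[len("Bearer "):]
        # Constant-time comparison to avoid timing side channels.
        return hmac.compare_digest(presented, SERVICE_TOKEN)

Each service calls this at the top of its request handlers; rotating the secret is a config change rather than a network-layer project.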
You should be able to spin up everything locally with docker-compose.
> I don't WANT the usual K8s virtual network for that, just an easy-to-use module inside the service itself.
K8s makes sense if you have a dedicated team (or at least a dedicated engineer) and if you really need the advanced stuff (blue/green deployments, scaling, etc). Once it's properly set up, it's actually a very pleasant platform.
If you don't need that, Docker (or preferably Podman) is indeed the way to go. You can actually go quite far with a VPS or a dedicated server these days. By the time you outgrow the most expensive server you can (reasonably) buy, you can probably afford the staff to roll out a "big boy" infrastructure.
We're using Docker/Podman with docker-compose for local development, and I can spin up our entire stack in seconds locally. I can attach a debugger to any component, or pull it out of the Docker and just run it inside my IDE. I even have an optional local Uptrace installation for OTEL observability testing.
My problem is that our deployment infrastructure is different. So I need to maintain two sets of descriptions of our services. I'd love a solution that would unify them, but so far nothing...
Docker Compose for local development is fine. If your K8s setup is so crazy complex that you need to test it locally, please stop.
It's trivial with my current setup, but not really possible with Tilt.
With Compose, you get proper n-tier application containerization with immutability. By adding an infrastructure-as-code tool such as Terraform to abstract your IT environment, you can deploy your application on-premises, in the cloud, or at a customer site with a single command.
For clustering needs, there’s Incus, and finally Kubernetes for very fast scaling (under a minute), massive deployments on large clusters, cloud offloading, and microservices.
Almost nobody truly needs the complexity of Kubernetes. The ROI simply isn’t there for the majority of use cases.
Large, shared database tables have been a huge issue in the last few jobs that I have had, and they are incredibly labor intensive to fix.
It's partly why I've realised more over time that learning computer science fundamentals actually ends up being super valuable.
I'm not talking about anything particularly deep either, just the very fundamentals you might come across in year one or two of a degree.
It sort of hooks back in over time as you discover that those people decades ago really got it, and all you're really doing as a software engineer is rediscovering their lessons yourself: thinking there's a better way, trying it, seeing it's not better, but noticing the fundamentals that are either being encouraged or violated, and pulling just those back out into a simpler model.
I feel like that's mostly what's happened with the swing over into microservices and the swing back into monoliths, pulling some of the fundamentals encouraged by microservices back into monolith land but discarding all the other complexities that don't add anything.
I actually like close to a full microservice architecture model, once you allow them all to share the database (possibly through a shared API layer).
Why small orgs use microservices: they make it nearly physically impossible to do certain classes of dumb shit.