You want microservices, but do you need them?
84 points | 6 hours ago | 25 comments | docker.com
stoneman24
5 hours ago
[-]
I would really like to send this article out to all the developers in my small company (only 120+ people, about 40 dev & test) but the political path has been chosen and the new shiny tech has people entranced.

What we do (physics simulation software) doesn't need all the complexity (in my opinion as a long-time software developer & tester) and software engineering knowledge that splitting stuff into microservices requires.

Only have as much complexity as you absolutely need; the old saying "Keep it simple, stupid" still has a lot of truth.

But the path is set, so I’ll just do my best as an individual contributor for the company and the clients who I work with.

reply
to11mtm
1 hour ago
[-]
IMO, engineering mindset is a huge challenge when it comes to 'do you do microservices'.

And by that, I mean that I have at times seen, and perhaps even personally used as a cudgel, the argument that "this thing has a specific contract and it is implicitly separate, and it forces people to remember that if their change needs to touch other parts, then they have to communicate it". In the real world, sometimes you need to partition software enough that engineers don't get too far out of the boundaries one way or another (i.e. changes inadvertently breaking something else because they were not focused enough).

reply
randall
1 hour ago
[-]
but fr at facebook we just had unit tests. if someone else broke your code it’s your fault unless you have tests.

there are of course microservices for things like news feed etc, but iirc all of fb.com and mobile app graphql is from the monolith by default.

reply
walt_grata
5 hours ago
[-]
I started making the case for organizational efficiency rather than a technical argument. Demonstrating how the larger number of people and teams necessary to make a decision and a change impacts the amount of time it takes to ship new features has been more effective IME.
reply
venturecruelty
2 hours ago
[-]
This article shows up here once in a while, but it's a good read: https://softwarecrisis.dev/letters/tech-is-a-pop-culture/
reply
LtWorf
5 hours ago
[-]
I thought microservices were old news by now, which is why these kinds of articles are finally appearing.
reply
Nextgrid
1 hour ago
[-]
It's less about how old microservices are and more that, with ZIRP being over, there is now finally pressure to improve software development efficiency and, to a certain extent, optimize infrastructure costs. Developers are now riding the new wave.
reply
xnx
4 hours ago
[-]
I would really like to send this article back in time 11 years
reply
echelon
5 hours ago
[-]
If you have workloads with different shapes, microservices make sense.

If not, do the monolith thing as long as you can.

But if you're processing jobs that need hand off to a GPU, just carve out a service for it. Stop lamenting over microservices.

If you've got 100+ engineers and different teams own different things, try microservices. Otherwise, maybe keep doing the monolith.

If your microservice is as thin as leftpad.js and hosts only one RPC call, maybe don't do that. But if you need to carve out a thumbnailing service or authC/authZ service, that's a good boundary.

There is no "one size fits all" prescription here.

reply
soco
4 hours ago
[-]
I wonder, at what point does a service get called a microservice? The team-sized service advocated by the usual argument doesn't sound that "micro" to me - but it is most of the time the right size.
reply
karmakaze
3 hours ago
[-]
The definitional size I've read and heard is that your team could (with the benefit of hindsight) reimplement a microservice in 2 weeks. That sounds fairly extreme, but a month seems within reason to me.

The other key difference between microservices and other architectures is that each microservice should do its primary function (temporarily) without hard dependencies, which basically means having a copy of the data that's needed. Service Oriented Architecture doesn't have this as one of its properties which is why I think of it as a mildly distributed monolith. "Distributed monolith" is the worst thing you could call a set of microservices--all the pain without the gains.
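
As a sketch of that data-copy property (Go, with hypothetical names; the event plumbing is assumed, not shown): the service maintains its own local copy of the upstream data from events, so its primary function keeps working, if slightly stale, while the owner of that data is down.

    // Hypothetical sketch: an order service keeps its own copy of the
    // customer data it needs, fed by events, instead of calling the
    // customer service on every request.
    package orders

    import "sync"

    // CustomerUpdated is the event the owning service publishes.
    type CustomerUpdated struct {
        ID    string
        Email string
    }

    type OrderService struct {
        mu        sync.RWMutex
        customers map[string]CustomerUpdated // local copy, not a hard dependency
    }

    func NewOrderService() *OrderService {
        return &OrderService{customers: make(map[string]CustomerUpdated)}
    }

    // OnCustomerUpdated is wired to whatever event bus you use (Kafka, NATS, ...).
    func (s *OrderService) OnCustomerUpdated(ev CustomerUpdated) {
        s.mu.Lock()
        defer s.mu.Unlock()
        s.customers[ev.ID] = ev
    }

    // EmailFor keeps working while the customer service is down,
    // at the cost of possibly returning slightly stale data.
    func (s *OrderService) EmailFor(customerID string) (string, bool) {
        s.mu.RLock()
        defer s.mu.RUnlock()
        c, ok := s.customers[customerID]
        return c.Email, ok
    }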

reply
ants_everywhere
1 hour ago
[-]
That's a pretty extreme definition in my opinion.

Google played a role in popularizing the microservice approach.

When I was at Google, a microservice would often be worked on with teams of 10-30 people and take a few years to implement. A small team of 4-5 people could get a service started, but it would often take additional headcount to productionize the service and go to market.

I have a feeling people overestimate how small microservices are and underestimate how big monorepos are. About nine times out of ten, when I see something called a monorepo, it's for a single project as opposed to a repo that spans multiple projects. I think the same is true of microservices. Many things that Amazon or Google considers microservices might be considered monoliths by the outside world.

reply
xnx
4 hours ago
[-]
Good point. They would not have been as popular if they were called "multi-services" or "team partitioned apps".
reply
hosh
1 hour ago
[-]
It is not so black and white.

The BEAM ecosystem (Erlang, Elixir, Gleam, etc) can do distributed microservices within a monolith.

A single monolith can be deployed in different ways to handle different scalability requirements. For example, a distinct set of pods responding to endpoints for reports, another set for just websocket connections, and the remaining ones for the rest of the endpoints. Those can be independently scaled but released on the same cadence.

There was a long-form article I once read that reasoned through this. Given M code sources, there are N deployables. It is the delivery system's job to transform M -> N. M is based on how the engineering team(s) work on code, whether that is a monorepo, multiple repos, shared libraries, etc. N is what makes sense operationally. By making it the delivery system's job to transform M -> N, you can decouple M and N. I don't remember the title of that article anymore. (Maybe someone on the internet remembers.)
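
As a sketch of what that looks like in practice (Go, with a hypothetical ROLE variable and handler names): one codebase, one artifact, with the deployable's role chosen at startup, so each role can be scaled independently but released on the same cadence.

    // Hypothetical sketch: one monolith binary deployed as several
    // differently-scaled pod sets by selecting a role at startup.
    package main

    import (
        "net/http"
        "os"
    )

    func main() {
        mux := http.NewServeMux()

        // ROLE is set per deployment (e.g. "reports", "websocket", "web").
        switch os.Getenv("ROLE") {
        case "reports":
            mux.HandleFunc("/reports", handleReports)
        case "websocket":
            mux.HandleFunc("/ws", handleWebsocket)
        default:
            // The catch-all deployment serves every remaining endpoint.
            mux.HandleFunc("/", handleWeb)
        }

        http.ListenAndServe(":8080", mux)
    }

    func handleReports(w http.ResponseWriter, r *http.Request)   { w.Write([]byte("report\n")) }
    func handleWebsocket(w http.ResponseWriter, r *http.Request) { /* upgrade to websocket here */ }
    func handleWeb(w http.ResponseWriter, r *http.Request)       { w.Write([]byte("ok\n")) }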

reply
Nextgrid
1 hour ago
[-]
> The BEAM ecosystem (Erlang, Elixir, Gleam, etc) can do distributed microservices within a monolith.

This ain't new. Any language supporting loading modules can give you the organizational benefit of microservices (if you consider it a benefit, that is - very few orgs actually benefit from the separation) while operating like a monolith. Java could do it 20+ years ago: just upload your .WAR files to an application server.

reply
rdtsc
1 hour ago
[-]
> Java could do it 20+ years ago, just upload your .WAR files to an application server.

Erlang could do it almost 40 years ago.

It can be used to upgrade applications at runtime without stopping the service. That works well in Erlang; it's designed from the ground up for it. I know of a few places that used that feature.

reply
venturecruelty
1 hour ago
[-]
Erlang seems like a joy to use. I feel a slight pang of regret that I haven't (yet) gotten to use it in my career. (I don't quite have the time or energy to play with it during my off hours, but it is on my list for someday.)
reply
rapnie
50 minutes ago
[-]
You might give Gleam [0] a try, which is advertised as a "language you can learn in a day". It is type-safe, supports the BEAM, and you can easily invoke Erlang and Elixir. It compiles to Erlang or JavaScript.

[0] https://gleam.run/

reply
venturecruelty
25 minutes ago
[-]
This looks delightful! Thanks for the recommendation!
reply
to11mtm
1 hour ago
[-]
I mean, sure, but one could also argue that VB6 could do the same analogue of Java so long as ASP was involved... And yes, I've seen it done; you have a basic interface similar to an actor, but really more like 'take a key-value in for input, do processing on it, return a key-value to go to the next stage', plus the other necessary glue to handle that. The way the glue worked, it was minimal ceremony to get a new module in.

NGL, it was clever enough that every few years I think about trying to redo the concept in another language...

reply
rdtsc
1 hour ago
[-]
Yup, good point on the BEAM. The joke we used when microservices were hot was that the BEAM is already ahead with nano-services: a gen_server is a nice lightweight, isolated process. You can define a callback API wrapper for it and deploy millions of them on a cluster.
reply
INTPenis
5 hours ago
[-]
I'm helping a company get out of legacy hell right now. And instead of saying we need microservices, let's start with just a service oriented architecture. That would be a huge step forward.

Most companies should be perfectly fine with a service oriented architecture. When you need microservices, you have made it. That's a sign of a very high level of activity from your users, it's a sign that your product has been successful.

Don't celebrate before you have cause to do so. Keep it simple, stupid.

reply
dragonwriter
4 hours ago
[-]
> And instead of saying we need microservices, let's start with just a service oriented architecture.

I think the main reason microservices were called "microservices" and not "service-oriented architecture" is that they were an attempt to revive the original SOA concept at a time when "service-oriented architecture" as a name was still tainted by a perceived association with XML and the WS-* series of standards (and, ironically, often with systems that supported some subset of those standards for interaction despite not really applying the concepts of the architectural style).

reply
zbentley
1 hour ago
[-]
What characteristics define "legacy hell"?

I'm curious, and the specific list of problems and pain points (if--big if!--everyone there agrees what they are) can help more clearly guide the decisions as to what the next architecture should look like--SoA, monolithic, and so on.

reply
shoo
3 hours ago
[-]
Service oriented architecture seems like a pretty good idea.

I've seen a few regrettable things at one job where they'd ended up shipping a microservice-y design without much thought about service interfaces. One small example: team A owns a service that runs as an overnight job making customer-specific recommendations that get written to a database, and team B owns a service that surfaces those recommendations as a customer-facing app feature and reads directly from that database. It probably ended up that way because team A had the data scientists, team B had the app backend engineers for that feature, they had to ship something, and no architect or senior engineer put their foot down about interfaces.

That'd be a pretty reasonable design if team A and team B were the same team, so they could regard the database as internal, with no way to access it without going through a service with a well-defined interface. Failing that, it's hard to evolve the schema of the data model in the DB: there's no well-defined interface you can use to decouple implementation changes from consumers, and the consuming team B has its own list of quarterly priorities.

Microservices & alternatives aren't really properties of the technical system in isolation, they also depend on the org chart & which teams owns what parts of the overall system.

SOA: pretty good, microservices: probably not a great idea, microservices without SOA: avoid.

For anyone unfamiliar with SOA, there's a great sub-rant in Steve Yegge's 2011 google platforms rant [1][2] focusing on Amazon's switch to service oriented architecture.

[1] https://courses.cs.washington.edu/courses/cse452/23wi/papers... [2] corresponding HN thread from 2011 https://news.ycombinator.com/item?id=3101876

reply
rockemsockem
5 hours ago
[-]
You need multiple services whenever the scaling requirements of two components of your system are significantly different. That's pretty much it. These are often called microservices, but they don't have to actually be "micro".
reply
roncesvalles
2 hours ago
[-]
That's the most nonsensical reason to adopt microservices imo.

Consider this: every API call (or function call) in your application has different scaling requirements. Every LOC in your application has different scaling requirements. What difference does it make whether you scale it all "together" as a monolith or separately? One step further, I'd argue it's better to scale everything together because the total breathing room available to any one function experiencing unusual load is higher than if you deployed everything separately. Not to mention intra- and inter-process comm being much cheaper than network calls.

The "correct" reasons for going microservices are exclusively human -- walling off too much complexity for one person or one team to grapple with. Some hypothetical big brain alien species would draw the line between microservices at completely different levels of complexity.

reply
venturecruelty
2 hours ago
[-]
At this point, I'm convinced that too many people simply haven't built software in a way that isn't super Kubernetes-ified, so they don't know that it's possible. This is the field where developers think 32 GB of RAM isn't enough in their laptop, when we went to the moon with like... 4K. There is no historical or cultural memory in software anymore, so people graduate not understanding that you can actually handle 10,000 connections per second now on a five-year-old server.
reply
Nextgrid
1 hour ago
[-]
Many developers started their career during the ZIRP era where none of the typical constraints of "engineering" (cost control, etc) actually applied and complexity & high cloud bills were seen as a good thing, so no wonder.
reply
lijok
1 hour ago
[-]
You’re focusing on the theoretical and ignoring cost. That’s incompetent engineering.
reply
siliconwrath
1 hour ago
[-]
Another case I’ve seen is to separate a specific part of the system which has regulatory or compliance requirements that might be challenging to support for the rest of the larger system, eg HIPAA compliance, PCI compliance, etc.

(To clarify, I’m not disagreeing with you!)

reply
zmmmmm
2 hours ago
[-]
It has other advantages.

Operationally, it is very nice to be able to update one discrete function on its own in a patch cycle. You can try to persuade yourself you will pull it off with a modular monolith but the physical isolation of separate services provides guarantees that no amount of testing / review / good intentions can.

However, it's equally an argument for SOA as it is for microservices.

reply
twodave
5 hours ago
[-]
I came here to say the same. If you're arguing either for or against microservices, you're probably not thinking about the problem correctly. Running one big service may make sense if your resource needs are pretty uniform. Even if they're not, you need to weigh the cost of adding complexity against the cost of scaling some things prematurely or unnecessarily. Often this is an acceptable precursor to splitting up a process.
reply
Nextgrid
1 hour ago
[-]
You can still horizontally scale a monolith and distribute requests equally or route certain requests to certain instances; the only downside is that those instances would technically waste a few hundred MBs of RAM holding code for endpoints they will never serve; however RAM is cheap compared to the labor cost of a microservices environment.
reply
the__alchemist
5 hours ago
[-]
On the theme of several other responders:

I don't want microservices; I want an executable. Memory is shared directly, and the IDE and compiler know about the whole system by virtue of it being integrated.

reply
shoo
4 hours ago
[-]
Probably works OK for a small project with a close-knit team of skilled contributors, where there's some well-defined structure and everyone has sufficient high-level understanding of that structure to know what kinds of dependencies are or are not healthy to have.

But, unless you have some way of enforcing that access between different components happens through some kind of well defined interfaces, the codebase may end up very tightly coupled and expensive or impractical to evolve and change, if shared memory makes it easy for folks to add direct dependencies between data structures of different components that shouldn't be coupled.

reply
default-kramer
55 minutes ago
[-]
> But, unless you have some way of enforcing that access between different components happens through some kind of well defined interfaces, the codebase may end up very tightly coupled and expensive or impractical to evolve and change

You are describing the "microservice architecture" that I currently loathe at my day job. Fans of microservices would accurately say "well that's not proper microservices; that's a distributed monolith" but my point is that choosing microservices does not enforce any kind of architectural quality at all. It just means that all of your mistakes are now eternally enshrined thanks to Hyrum's Law, rather than being private/unpublished functions that are easy to refactor using "Find All References" and unit tests.

reply
Nextgrid
1 hour ago
[-]
> through some kind of well defined interfaces

Every compiled language has the concept of "interfaces", and can even load compiled modules/assemblies if you insist on them being built separately.

The compiler will enforce interface compliance much better than hitting untyped JSON endpoints over a network.
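
A minimal sketch of that compile-time enforcement (Go, hypothetical names): the consumer depends only on the interface, so any drift in the contract fails the build rather than surfacing as a runtime JSON mismatch.

    // Hypothetical sketch: a module boundary the compiler enforces.
    package billing

    // InvoiceStore is the only way other modules may touch invoice data.
    type InvoiceStore interface {
        Invoice(id string) (Invoice, error)
    }

    type Invoice struct {
        ID    string
        Cents int64
    }

    // Checkout depends on the interface, not on the implementation.
    // If the contract changes, every caller fails to compile instead
    // of failing at runtime the way a drifted JSON endpoint would.
    func Checkout(store InvoiceStore, id string) (int64, error) {
        inv, err := store.Invoice(id)
        if err != nil {
            return 0, err
        }
        return inv.Cents, nil
    }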

reply
jeltz
2 hours ago
[-]
Video games are very successfully built by huge teams as monoliths. As are some big open source projects like Linux and PostgreSQL.
reply
ErroneousBosh
5 hours ago
[-]
I love the idea that I can compile all my functionality including HTML templates, javascript, and CSS into a single albeit huge Golang binary.

I have never done this yet.

But I love the idea of it.
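
For what it's worth, Go's embed package gets you most of the way there in a few lines (a minimal sketch, assuming your templates and assets live in a static/ directory next to the source):

    // Minimal sketch: HTML, JS, and CSS compiled into the binary
    // via go:embed and served from the embedded filesystem.
    package main

    import (
        "embed"
        "io/fs"
        "net/http"
    )

    //go:embed static
    var assets embed.FS

    func main() {
        static, err := fs.Sub(assets, "static")
        if err != nil {
            panic(err)
        }
        http.Handle("/", http.FileServer(http.FS(static)))
        http.ListenAndServe(":8080", nil)
    }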

reply
hosainnet
5 hours ago
[-]
You can already do this with Deno Compile

https://deno.com/blog/deno-compile-executable-programs

reply
SatvikBeri
5 hours ago
[-]
I loved uberjars back when I was writing Scala. I don't miss much about the JVM, but I really miss having a single executable I could just upload and run without having to pay attention to the environment on the host machine.
reply
cogman10
4 hours ago
[-]
That's essentially the role docker serves now. Everything you need to run in 1 single image.
reply
SatvikBeri
4 hours ago
[-]
Yeah, but building a docker image tends to be a lot heavier weight and slower, in my experience, than uploading a single jar
reply
cogman10
3 hours ago
[-]
Heavier weight? Yes. Slower? Should be the same performance: unless you are on a non-Linux host, there is no Docker penalty.

The only time I can think of where a single JVM might be faster is a multi-tenant setup. In that case, one JVM can be more effective with the GC than having multiple JVMs running.

reply
c-hendricks
4 hours ago
[-]
This is what I'm doing with my side project! A set of personal smart picture frames for me and my partner. One executable does:

- the uploader API

- the uploader UI

- the frame API

- the frame UI

UIs are SSG'd with solid-js and solid-start then served with gin.

It's really fun.

reply
bb88
3 hours ago
[-]
Great nick!
reply
p1necone
5 hours ago
[-]
I like goldilocks services: as big or as small as actually makes sense for your domain/resource considerations, usually with no single-HTTP-endpoint services in sight.
reply
cogman10
4 hours ago
[-]
Once upon a time, that's what a microservice was. A monolith was the company's software all in one software package.

I think what changed things is that FaaS came along and people started describing nanoservices as microservices, which led to some really dumb decisions.

I've worked on a true monolith and it wasn't fun. Having your change rolled back because another team made a mistake and it was hard to isolate the two changes was really rough.

reply
venturecruelty
2 hours ago
[-]
I don't want or need microservices. What I want is for people to stop putting TCP round trips in between what would otherwise be simple function calls in a sane universe. I don't want to have to take a graduate-level course on the CAP theorem to clock in and work on whatever "Uber for dogs" nonsense is paying my rent. You almost certainly don't have a scaling problem that necessitates a distributed system, I guarantee it. I have had an average career, and every single time someone shoved a Kubernetes-shaped peg into a server-shaped hole, it's been a shitshow. These systems are slow, expensive, difficult to reason about, and largely unnecessary for most people who handle a few hundred or thousand connections per second on average. (Don't @ me about bursty traffic, I understand how it works.)

And in a few days, we're going to get a long thread about how software is slow and broken and terrible, and nobody will connect the dots. Software sucks because the way we build it sucks. I've had the distinct privilege of helping another team support their Kubernetes monstrosity, which shat the bed around double-digit requests per second, and it was a comedy of errors. What should've otherwise just been some Rails or Django application with HTML templating and a database was three or four different Kubernetes pods, using gRPC to poorly and unnecessarily communicate with each other. It went down all. The. Time. And it was a direct result of the unnecessary complexity of Kubernetes and the associated pageantry.

I would also like to remind everyone that Kubernetes isn't doing anything your operating system can't do, only better. Networking? Your OS does that. Scheduling? Your OS does that. Resource allocation and sandboxing? If your OS is decent, it can absolutely do that. Access control? Yup.

I can confidently say that 95% of the time, you don't need Kubernetes. For the other 5%, really look deep into your heart and ask yourself if you actually have the engineering problems that distributed systems solve (and if you're okay with the other problems distributed systems cause). I've had five or six jobs now that shoehorned Kubernetes into things, and I can confidently say that the juice ain't worth the squeeze.

reply
Nextgrid
1 hour ago
[-]
> I don't want to have to take a graduate-level course on the CAP theorem

It would be a blessing if people actually did that, because then they'd avoid useless distributed systems.

> using gRPC to poorly and unnecessarily communicate

At least you've had the blessing of it being gRPC and not having to manually write JSON de/serializers by hand.

> Kubernetes isn't doing anything your operating system can't do

Kubernetes is good if you need to orchestrate across multiple machines. This of course requires an actual need for multiple machines. If you're doing so with underpowered cloud VMs (of which you waste a third of the RAM on K8s itself), just get a single bigger VM and skip K8s.

reply
lijok
1 hour ago
[-]
> your OS does that

Which one?

reply
venturecruelty
55 minutes ago
[-]
Any sane one.
reply
lijok
53 minutes ago
[-]
Which ones are the sane ones?
reply
8f2ab37a-ed6c
5 hours ago
[-]
It's funny that we've now been having this conversation on HN for at least a decade.
reply
eternityforest
5 hours ago
[-]
I don't want microservices, I think what I really want is self contained WebAssembly modules!
reply
kaladin-jasnah
5 hours ago
[-]
What's the performance trade-off of something like this over containerization? I have heard of an operating system that runs WASM (https://github.com/JonasKruckenberg/k23).
reply
ethanwillis
5 hours ago
[-]
Highly depends on the wasm runtime you're running things on. I haven't seen any good recent benchmarks (as in, from the past few years), but if I remember right, wasmer is putting together some and trying to automate the results.
reply
rao-v
5 hours ago
[-]
Unironically this
reply
karmakaze
3 hours ago
[-]
There's one thing I've learned about microservices. If you've ever gone down the path of making them, failing and making them again until they all worked as they should with the desired 9's of uptime, then you'll only want to make them if it's really the right thing to make. It's not worth the effort otherwise.

So no I don't want microservices (again), but sometimes it's still the right thing.

reply
dzonga
5 hours ago
[-]
Microservices were an effect of the ZIRP era. You literally had places like Monzo bragging that they have 3 microservices for each engineer.

3-tier architecture has proven time and time again to be robust for most workloads.

reply
LaurensBER
5 hours ago
[-]
1 micro-service per pizza-sized team seems to work pretty well.

Put it into a monorepo so the other teams have visibility in what is going on and can create PRs if needed.

reply
LtWorf
5 hours ago
[-]
Uh? You eat less than a pizza per person?
reply
SiempreViernes
5 hours ago
[-]
To be fair pizzas are quite easy to scale from small kid sizes up to enough for several persons.

But it is a bit sad that the poster apparently never bought a pizza just for themselves.

reply
LaurensBER
3 hours ago
[-]
It's a reference to Amazon's statement that teams should never grow larger than what you can feed with 2 (large) pizzas.

The optimum is probably closer to 1 than to 2.

reply
hackpelican
5 hours ago
[-]
8x engineer
reply
Uvix
4 hours ago
[-]
Certainly no more than three tiers.

"Traditional" three-tier, where you have a web server talking to an application server talking to a database server, seems like overkill; I'd get rid of the separate application tier.

If your tiers are browser, web API server, database: then three tiers still makes sense.

reply
vb-8448
5 hours ago
[-]
In my opinion, "you need microservices" peaked around 2018-2019... Does anyone nowadays think that, apart from when you reach certain limits and specific contexts, they are a good idea?
reply
soco
4 hours ago
[-]
Half of the jobs I'm applying to have microservices in the description, much more often than, say, REST or Boot, so somebody definitely thinks they're a general solution to something.
reply
Nextgrid
1 hour ago
[-]
Microservices is an excellent generator for developer busywork and increased headcount. Busywork benefits the developers, increased headcount benefits their manager, and so on.
reply
mjr00
5 hours ago
[-]
I feel like this has been beaten to death and this article isn't saying much new. As usual, the answer is somewhere in the middle (what the article calls "miniservices"). Ultimately:

1. Full-on microservices, i.e. one independent lambda per request type, is a good idea pretty much never. It's a meme that caught on because a few engineers at Netflix did it as a joke that nobody else was in on.

2. Full-on monolith, i.e. every developer contributes to the same application code that gets deployed, does work, but you do eventually reach a breaking point as either the code ages and/or the team scales. The difficulty of upgrading core libraries like your ORM, monitoring/alerting, pandas/numpy, etc, or infrastructure like your Java or Python runtime, grows superlinearly with the amount of code, and everything being in one deployed artifact makes partial upgrades either extremely tricky or impossible depending on the language. On the operational and managerial side, deployments and ownership (i.e. "bug happened, who's responsible for fixing?") eventually get way too complex as your organization scales. These are solvable problems though, so it's the best approach if you have a less experienced team.

3. If you're implementing any sort of SoA without having done it before -- you will fuck it up. Maybe I'm just speaking as a cynical veteran now, but IMO lots of orgs have keen but relatively junior staff leading the charge for services and kubernetes and whatnot (for mostly selfish resume-driven development purposes, but that's a separate topic) and end up making critical mistakes. Usually some combination of: multiple services using a shared database; not thinking about API versioning; not properly separating the domains; using shared libraries that end up requiring synchronized upgrades.

There's a lot of service-oriented footguns that are much harder to unwind than mistakes made in a monolithic app, but it's really hard to beat SoA done well with respect to maintainability and operations, in my opinion.

reply
SatvikBeri
5 hours ago
[-]
Re 1: I like Matt Ranney's take on it, where he says microservices are a form of technical debt – they let you deploy faster and more independently in exchange for an overall more complex codebase.

This makes it clear when you might want microservices: you're going through a period of hypergrowth and deployment is a bigger bottleneck than code. This made sense for DoorDash during covid, but that's a very unusual circumstance.

reply
yowlingcat
5 hours ago
[-]
I see a lot of value in spinning up microservices where the database is global across all services (and not inside each service), but I struggle to see the value of separate core transactional databases for separate services, unless/until two separate parts of the organization are almost two separate companies that cannot operate as a single org. You lose data integrity, joining ability, one coherent state of the world, etc.

The main time I can see this making sense is when the data access patterns are so different in scale and frequency that they're optimizing for different things that cause resource contention, but even then, my question would become do you really need a separate instance of the same kind of DB inside the service, or do you need another global replica/a new instance of a new but different kind of DB (for example Clickhouse if you've been running Postgres and now need efficient OLAP on large columnar data).

Once you get to this scale, I can see the idea of cell-based architecture [1] making sense -- but even at this point, you're really looking at a multi-dimensionally sharded global persistence store where each cell is functionally isolated for a single slice of routing space. This makes me question the value of microservices with state bound to the service writ large and I can't really think of a good use case for it.

[1] https://docs.aws.amazon.com/wellarchitected/latest/reducing-...

reply
mjr00
4 hours ago
[-]
> I see a lot of value in spinning up microservices where the database is global across all services (and not inside the service)

The issue with this is schema evolution. As a very simple example, let's say you have a User table and many microservices accessing it. Now you want to add an "IsDeleted" column to implement soft deletion; how do you do that? First you need to add the actual column to the database, then you need to update every single service that queries the table to make sure it filters out IsDeleted=True, deploy all those services, and only then can you actually start using the column. If you must update services in lockstep like this, you've built a distributed monolith, which is all of the complexity of microservices with none of the benefits.

A proper service-oriented way to deal with this is have a single service with control of the User table and expose a `GetUsers` API. This way, only one database and its associated service needs to be updated to support IsDeleted. Because of API stability guarantees--another important guarantee of good SoA--other services will continue to only get non-deleted users when using this API, without any updates on their end.
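
A minimal sketch of that (Go, hypothetical names): the IsDeleted filter lives in exactly one place, behind the owning service's API, so no other service ever needs a lockstep deploy.

    // Hypothetical sketch: the user service owns the User table, and
    // soft deletion stays an implementation detail behind GetUsers.
    package userservice

    import (
        "database/sql"
        "encoding/json"
        "net/http"
    )

    type User struct {
        ID    int64  `json:"id"`
        Email string `json:"email"`
    }

    type UserService struct{ db *sql.DB }

    // GetUsers keeps its contract stable ("returns non-deleted users");
    // the is_deleted filter is applied here and nowhere else.
    func (s *UserService) GetUsers(w http.ResponseWriter, r *http.Request) {
        rows, err := s.db.Query(`SELECT id, email FROM users WHERE is_deleted = FALSE`)
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        defer rows.Close()

        var users []User
        for rows.Next() {
            var u User
            if err := rows.Scan(&u.ID, &u.Email); err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
                return
            }
            users = append(users, u)
        }
        json.NewEncoder(w).Encode(users)
    }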

> You lose data integrity, joining ability, one coherent state of the world, etc.

You do lose this! And it's one of the tradeoffs, and why understanding your domain is so important for doing SoA well. For subsets of the domain where data integrity is important, it should all be in one database, and controlled by one service. For most domains, though, a lot of features don't have strict integrity requirements. As a concrete though slightly simplified example, I work with IoT time-series data, and one feature of our platform is using some ML algorithms to predict future values based on historical trends. The prediction calculation and storage of its results is done in a separate service, with the results being linked back via a "foreign key" to the device ID in the primary database. Now, if that device is deleted from the primary database, what happens? You have a bunch of orphaned rows in the prediction service's database. But how big of a deal is this actually? We never "walk back" from any individual prediction record to the device via the ID in the row; queries are always some variant of "give me the predictions for device ID 123". So the only real consequence is a bit of database bloat, which can be resolved via regularly scheduled orphan checking processes if it's a concern.

It's definitely a mindshift if you're used to a "everything in one RDBMS linked by foreign keys" strategy, but I've seen this successfully deployed at many companies (AWS, among others).

reply
effnorwood
33 minutes ago
[-]
No
reply
gnarlouse
1 hour ago
[-]
I think most of the time when small teams say “we should do microservices” what they really mean is “we should try a service oriented architecture.” Especially if you’re doing a monorepo, it becomes fairly routine to make choices around how to consolidate like modules.

For example, I work in a small company with a data processing pipeline that has lots of human in the loop steps. A monolith would work, but a major consideration with it being a small company is cloud cost, and a monolith would mean slow load times in serverless or persistent node costs regardless of traffic. A lot of our processing steps are automated and ephemeral, and across all our customers, the data tends to look like a wavelet passing through the system with an average center of mass mostly orbiting around a given step. A service oriented architecture let us:

- Separate steps into smaller “apps” that run on demand with serverless workers.

- avoid the scaling issues of killing our database with too many concurrent connections by having a single “data service”—essentially organizing all the wires neatly.

- ensure that data access (read/write on information extracted from our core business objects) happens in a unified manner, so that we don’t end up with weird, fucky API versioning.

- for the human in the loop steps, data stops in the job queue at a CRUD app as a notification, where data analysts manually intervene.

A monolith would have been an impedance mismatch for the inherent “assembly line” model here, regardless of dogma and the fact that yes, a monolith could conceivably handle a system like this without as much network traffic.

You could argue that the data service is a microservice. It’s a single service that serves a single use case and guards its database access behind an API. I would reply to any consternation or foreboding due to its presence in a small company by saying “guess what, it works incredibly well for us. Architecture is architecture: the pros and cons will out, just read them and build what works accordingly.”

reply
tdhz77
5 hours ago
[-]
I found a different benefit to microservices: AI understands them, and context matters. Monolithic apps confuse AI, whereas microservices enable it to be far more effective.
reply
zmmmmm
1 hour ago
[-]
It's an interesting question how AI influences this. If it scales up the scope of what an individual engineer can do, and if the primary driver of microservice scope is Conway's law, then in theory microservices should get "fatter".

However, I go the other way from you: I have found AI needs as much context as possible, and that means it understands monoliths (or fatter architectures) better - at least with the agentic-style approach, where it has access to the whole git tree / source repository. I find things break down a lot when changes are needed across source repositories.

reply
makapuf
2 hours ago
[-]
That's a benefit for monoliths, then.
reply
moltar
3 hours ago
[-]
I don’t want or need microservices.

I want just services.

reply
scuff3d
5 hours ago
[-]
The other problem is that very, very few people actually know how to design a microservice-based architecture. I've worked with half a dozen different teams who claim they're building microservices, but when you look at the system it's just a giant distributed monolith. Most of them are people who worked in legacy code bases, and while they like the idea of microservices, they can't let go of those design patterns. So they do the exact same thing but just put everything behind network calls. Drives me absolutely fucking nuts.
reply
lysace
5 hours ago
[-]
We've removed/merged most of the unnecessary services. The ones left have operational needs to stay separate.

The current hell is x years of undisciplined (in terms of perf and cost) new ORM code being deployed (SQLAlchemy). We do an insane number of read queries per second relative to our usage.

I honestly think the newish devs we have hired don't understand SQL at all. They seem to think of it as some arcane low level thing people used in the 80s/90s.

reply
AJRF
5 hours ago
[-]
Another good use case for a microservice: when you would have to change the compute size of your monolith just to accommodate new functionality.

I had an architect bemoan the suggestion that we use a microservice, until he had to begrudgingly back down when he was told that the function we were talking about (running a CLIP model) would mean attaching a GPU to every task instance.

reply
ForOldHack
5 hours ago
[-]
During a major site rewrite, one of my junior cohorts suggested a monolithic re-entrant site... It easily tripled the TPS and halved the response time.

I was stunned... He comes up with this stuff all the time. Thanks Matt.

reply
awesome_dude
5 hours ago
[-]
We watched kernels go from monolithic to micro to hybrid.

And now, SaaS is finally making the jump to the same last position - hybrid/mini.

reply
callamdelaney
5 hours ago
[-]
Usually no
reply
cyberax
5 hours ago
[-]
I don't want microservices!

What I want is a lightweight infrastructure for macro-services. I want something to handle the user and machine-to-machine authentication (and maybe authorization).

I don't WANT the usual K8s virtual network for that, just an easy-to-use module inside the service itself.

You should be able to spin up everything locally with docker-compose.

reply
LaurensBER
5 hours ago
[-]
> What I want is a lightweight infrastructure for macro-services. I want something to handle the user and machine-to-machine authentication (and maybe authorization).

> I don't WANT the usual K8s virtual network for that, just an easy-to-use module inside the service itself.

K8s makes sense if you have a dedicated team (or at least a dedicated engineer) and if you really need the advanced stuff (blue/green deployments, scaling, etc). Once it's properly set up, it's actually a very pleasant platform.

If you don't need that, Docker (or preferably Podman) is indeed the way to go. You can actually go quite far with a VPS or a dedicated server these days. By the time you outgrow the most expensive server you can (reasonably) buy, you can probably afford the staff to roll out a "big boy" infrastructure.

reply
cyberax
5 hours ago
[-]
I tried to use K8s several times, and I just can't make it work. It's fine as a deployment platform, but I just can't justify its complexity for local development.

We're using Docker/Podman with docker-compose for local development, and I can spin up our entire stack in seconds locally. I can attach a debugger to any component, or pull it out of the Docker and just run it inside my IDE. I even have an optional local Uptrace installation for OTEL observability testing.

My problem is that our deployment infrastructure is different. So I need to maintain two sets of descriptions of our services. I'd love a solution that would unify them, but so far nothing...

reply
stackskipton
5 hours ago
[-]
I wouldn't use K8s for local development unless you have some system where there is a dev cluster and you can route traffic for a particular pod to your local workstation.

Docker Compose for local development is fine. If your K8s setup is crazy complex that you need to test it locally, please stop.

reply
fragmede
4 hours ago
[-]
Tilt? Skaffold? Configuration isn't free, but a debugger on a local k8s cluster that's at least somewhat representative of prod is pretty handy once you have it.
reply
cyberax
4 hours ago
[-]
I tried Tilt, but it's still too complicated. For example, we have a computer vision model that is a simple Python service. When developing on macOS, it's not possible to use GPUs inside containers, so we need to run it locally on the host.

It's trivial with my current setup, but not really possible with Tilt.

reply
ghthor
4 hours ago
[-]
I know this pain, though we’re running nomad not k8s as our cluster control plane. But local devs are stuck with docker-compose, so 2 different configurations for running locally versus in the production environment.
reply
rahen
5 hours ago
[-]
Unless you need horizontal scalability or clustering, Compose + Terraform is all you need.

With Compose, you get proper n-tier application containerization with immutability. By adding an infrastructure-as-code tool such as Terraform to abstract your IT environment, you can deploy your application on-premises, in the cloud, or at a customer site with a single command.

For clustering needs there's Incus, and finally Kubernetes for very fast (sub-minute) scalability, massive deployments on large clusters, cloud offloading, and microservices.

Almost nobody truly needs the complexity of Kubernetes. The ROI simply isn’t there for the majority of use cases.

reply
honkycat
5 hours ago
[-]
The one thing I would like to preserve from microservices is stuff about database table hygiene.

Large, shared database tables have been a huge issue in the last few jobs that I have had, and they are incredibly labor intensive to fix.

reply
davnicwil
5 hours ago
[-]
In my experience basically everything being good in software is downstream of good data modelling.

It's partly why I've realised more over time that learning computer science fundamentals actually ends up being super valuable.

I'm not talking about anything particularly deep either, just the very fundamentals you might come across in year one or two of a degree.

It sort of hooks back in over time as you discover that these people decades ago really got it and all you're really doing as a software engineer is rediscovering these lessons yourself, basically by thinking there's a better way, trying it, seeing it's not better, but noticing the fundamentals that are either being encouraged or violated and pulling just those back out into a simpler model.

I feel like that's mostly what's happened with the swing over into microservices and the swing back into monoliths, pulling some of the fundamentals encouraged by microservices back into monolith land but discarding all the other complexities that don't add anything.

reply
asdfman123
5 hours ago
[-]
I learned this early, and as a result I'm the guy who's trying to clean up other people's crap while they ship features and get promoted.
reply
zmmmmm
2 hours ago
[-]
Heh, split databases are the thing I think is most problematic and the first thing I would eliminate in most microservices architectures. A huge fraction of the problems with microservices come from trying to split the data model along team structure / service domain rather than the true application / underlying business domain. It doesn't mean you shouldn't have multiple databases, just that the concept of splitting them arbitrarily along service lines is a huge cause of friction / impedance mismatch / overhead.

I actually like close to a full microservice architecture model, once you allow them all to share the database (possibly through a shared API layer).

reply
fny
5 hours ago
[-]
A database is a global variable in disguise.
reply
asdfman123
5 hours ago
[-]
Why big orgs use microservices: makes teams focused on a clear problem domain

Why small orgs use microservices: makes it nearly physically impossible to do certain classes of dumb shit

reply
devmor
5 hours ago
[-]
I feel that if you have multiple sets of application logic that need to access the same data, there should be an internal API between them and the database that keeps that access to spec.
reply
mlfreeman
5 hours ago
[-]
Only allow clients to execute stored procedures?
reply
devmor
1 hour ago
[-]
That could certainly be one way to handle it, if your specific problem domain supports that logic being in stored procedures.
reply