Does coding with LLMs mean more microservices?
26 points
7 hours ago
| 7 comments
| ben.page
| HN
int_19h
16 minutes ago
[-]
That's an argument for components with well-defined contracts on their interfaces, but making them microservices just complicates debugging for the model.

It's also unclear whether tight coupling is actually a problem when you can refactor this fast.

reply
dist-epoch
2 minutes ago
[-]
You're taking the article's argument too literally. They meant microservices also in the sense of microlibraries, etc., not strictly an HTTP service.
reply
nikeee
40 minutes ago
[-]
What matters for LLMs is what matters for humans, which usually means DX. Most microservice setups are extremely hard to debug across service boundaries, so I think in the future we'll see more architectural decisions that make sense for LLMs to work with. Which will probably mean modular monoliths or something like that.
reply
Kim_Bruning
8 minutes ago
[-]
A typical rant (composed from memory) goes something like this:

> "These AI types are all delusional. My job is secure. Sure, your model can one-shot a small greenfield program in 5 minutes with zero debugging. But make it a little larger and it starts to forget features, introduces more bugs than you can fix, and forget about letting it loose on large legacy codebases."

What if that's not a diagnosis? What if we see that as an opportunity? O:-)

I'm not saying it needs to be microservices, but say you can constrain the blast radius of an AI going oops (compaction is a famous oops-surface, for instance); and say you can split the work up into self-contained blocks where you can test your i/o and side effects thoroughly...

... well, that's going to be interesting, isn't it?

Programming was always supposed to be about that: structured programming, functions (preferably side-effect-free for this argument), classes & objects, and other forms of modularization including - OK, sure - microservices. I'm not sold on exactly the latter because it feels a bit too heavy for me. But... something like it?
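To make the "test your i/o and side effects thoroughly" point concrete, here's a minimal sketch (my own illustration, names hypothetical, not from the article): a self-contained, side-effect-free block whose entire contract can be checked at its boundary.

```python
def slugify(title: str) -> str:
    """Self-contained block: a pure function with no side effects,
    so its whole i/o contract is testable at the boundary."""
    return "-".join(title.lower().split())

# Checking the i/o surface of the block directly:
assert slugify("Hello World") == "hello-world"
assert slugify("  spaced   out ") == "spaced-out"
assert slugify("") == ""
```

If an AI goes "oops" inside a block like this, the blast radius is the block: the boundary assertions catch it before anything else depends on the result.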

reply
tatrions
1 hour ago
[-]
The bounded-surface-area insight is right, but the actual forcing function is context window size. A small codebase fits in context, so the LLM can reason end-to-end. You get the same containment with well-defined modules in a monolith if your tooling picks the right files to feed into the prompt.
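"Tooling picks the right files" can be as simple as scoping context to one module directory under a budget. A rough sketch (hypothetical helper, illustrative paths and budget):

```python
import os

def collect_module_context(module_dir, max_chars=32_000, exts=(".py",)):
    """Gather source files from one module directory into a single
    prompt context string, stopping before a rough character budget.
    Hypothetical sketch; real tools would rank files by relevance."""
    chunks = []
    used = 0
    for root, _dirs, files in os.walk(module_dir):
        for name in sorted(files):
            if not name.endswith(exts):
                continue
            path = os.path.join(root, name)
            with open(path, encoding="utf-8") as f:
                text = f.read()
            piece = f"# file: {path}\n{text}\n"
            if used + len(piece) > max_chars:
                return "".join(chunks)  # budget hit: stop at module boundary
            chunks.append(piece)
            used += len(piece)
    return "".join(chunks)
```

The module boundary does the containment work; the service boundary isn't needed for it.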

Interesting corollary: as context windows keep growing (8k to 1M+ in two years), this architectural pressure should actually reverse. When a model can hold your whole monolith in working memory, you get all the blast radius containment without the operational overhead of separate services, billing accounts, and deployment pipelines.

reply
dist-epoch
1 minute ago
[-]
Large context windows cost more money. So the pressure is still there to keep it tight.
reply
stingraycharles
56 minutes ago
[-]
This makes no sense, as you're able to have similar interfaces and contracts using regular code.

Microservices mostly solve an organizational problem - teams being able to work completely independently, do releases independently, etc. - but as soon as you're actually going to do that, you're introducing a lot of complexity (while gaining organizational scalability).

This has nothing to do with context sizes.

reply
siruwastaken
1 hour ago
[-]
This seems like the idea of modularizing code and using specific function signatures for data exchange as an API is being re-invented by people using AI. Aren't we already mostly doing things this way, albeit via submodules in a monolith, due to the cognitive strain it puts on humans to understand the whole thing at any given time?
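For what "function signatures for data exchange as an API" looks like inside a monolith, a minimal sketch (all names here are made up for illustration):

```python
from dataclasses import dataclass

# A submodule exposes one narrow, typed entry point instead of an
# HTTP endpoint; everything else stays private to the module.

@dataclass(frozen=True)
class Invoice:
    customer_id: str
    amount_cents: int

def total_owed(invoices: list[Invoice], customer_id: str) -> int:
    """The contract other submodules compile against."""
    return sum(i.amount_cents for i in invoices
               if i.customer_id == customer_id)
```

Same well-defined boundary as a service call, minus the network hop and the deployment pipeline.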
reply
_pdp_
1 hour ago
[-]
This makes no sense. You can easily make a monolith and build all parts of it in isolation - i.e. modules, plugins, packages.

In fact, my argument is that there will be more monolith applications due to AI coding assistants, not fewer.

reply
c1sc0
1 hour ago
[-]
Why microservices when small composable CLI tools seem a better fit for LLMs?
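For reference, the kind of small composable CLI tool meant here is a filter that reads stdin and writes stdout, so tools chain with pipes (a generic sketch, not anything specific from the thread):

```python
#!/usr/bin/env python3
"""Tiny composable filter: print only stdin lines containing
the pattern given as the first argument, grep-style."""
import sys

def matching_lines(lines, pattern):
    # Pure core, kept separate from the I/O shell so it's testable.
    return [line for line in lines if pattern in line]

if __name__ == "__main__":
    pattern = sys.argv[1] if len(sys.argv) > 1 else ""
    for line in matching_lines(sys.stdin, pattern):
        sys.stdout.write(line)
```

An LLM can hold a tool this size in context end-to-end, and `tool_a | tool_b` composes them with no shared state.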
reply
mrbungie
14 minutes ago
[-]
His argument is not about LLM tools but rather about which architecture is better suited for coding with LLMs.
reply