It's also unclear whether tight coupling is actually a problem when you can refactor this fast.
> "These AI types are all delusional. My job is secure. Sure your model can one-shot a small program in green field in 5 minutes with zero debugging. But make it a little larger and it starts to forget features, introduces more bugs than you can fix, and forget letting it loose on large legacy codebases"
What if that's not a diagnosis? What if we see that as an opportunity? O:-)
I'm not saying it needs to be microservices, but say you can constrain the blast radius of an AI going oops (compaction is a famous oops-surface, for instance); and say you can split the work up into self-contained blocks where you can test your i/o and side effects thoroughly...
... well, that's going to be interesting, isn't it?
Programming was always supposed to be about that: structured programming, functions (preferably side-effect-free for this argument), classes & objects, other forms of modularization including -ok sure- microservices. I'm not sold on that last one because it feels a bit too heavy for me. But ... something like that?
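To make the "self-contained blocks with testable I/O" idea concrete, here's a minimal sketch of the functional-core / imperative-shell split: a pure function you can test exhaustively, with the side effect isolated in a thin wrapper. All names are hypothetical, not from any real codebase.

```python
# Pure "core": no I/O, no globals. Its entire behavior is its
# input/output mapping, so an AI edit here has a small blast radius
# and a test suite can pin it down completely.
def apply_discount(prices: list[float], rate: float) -> list[float]:
    if not 0.0 <= rate <= 1.0:
        raise ValueError("rate must be between 0 and 1")
    return [round(p * (1.0 - rate), 2) for p in prices]


# Thin imperative "shell": the only place a side effect (file write)
# happens, kept deliberately trivial so there's little to get wrong.
def save_prices(path: str, prices: list[float]) -> None:
    with open(path, "w") as f:
        f.write("\n".join(f"{p:.2f}" for p in prices))


# The core can be verified without touching the filesystem at all:
assert apply_discount([10.0, 20.0], 0.1) == [9.0, 18.0]
```

The point isn't the discount logic; it's that the boundary between the tested pure part and the side-effecting part is explicit, which is exactly the containment the comment above is asking for, without reaching for separate services.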
Interesting corollary: as context windows keep growing (8k to 1M+ in two years), this architectural pressure should actually reverse. When a model can hold your whole monolith in working memory, you get all the blast radius containment without the operational overhead of separate services, billing accounts, and deployment pipelines.
Microservices mostly solve an organizational problem — teams being able to work completely independently, do releases independently, etc — but as soon as you actually do that, you introduce a lot of complexity (while gaining organizational scalability).
This has nothing to do with context sizes.
In fact, my argument is that there will be more monolithic applications due to AI coding assistants, not fewer.