Nothing about cajoling a model into writing what you want is essential complexity in software development.
In addition, when you do a lot of building with no theory, you tend to make lots and lots of new non-essential complexity.
Devtools are no exception. There was already plenty of nonessential complexity in them, and in the model era, is it gone? ...no, don't worry, it's all still there. We built all the shiny new layers right on top of all the old decaying ones, like putting lipstick on a pig.
Ah, a work of fiction.
We're witnessing a "horses to automobiles" moment in software development. Programming, as a professional discipline, is going to be over in a year or two at the outside. We're getting the "end of software engineering in six months" before we get a real "year of the Linux desktop". Or GTA VI.
Don't get me wrong, "sort of works if you squint at it" is downright miraculous by the standards of five years ago, but current models and harnesses are not sufficient to replace developers at this scale.
What if AI is better at tackling essential complexity too?
You can reduce it by process re-engineering, by changing the requirements, by managing expectations. But not by programming.
If we get an LLM to manage the rest of the organisation, then conceivably we could get it to reduce the essential complexity of the programming task. But that's putting the cart before the horse: getting an LLM to rearrange the organisation's processes so that it has less complexity to deal with when coding seems like a bad deal.
And complexity is one of the areas where we're still not seeing much improvement in LLMs. The common experience among people using LLM coding agents is that simple systems become easy, but complex systems still trip them up. That may change, of course, but that's the situation now.
There will always be someone ready to drive the price of computation low enough that it is democratized for all. Some may disagree, but eventually that will mean local inference, as computer hardware gets better and software algorithms get cleverer.
In this AI story, you can take a guess at who "The Priesthood" of the 2020s are.
> You still have to know what you want the computer to do, and that can be very hard. While not everyone wrote computer programs, the number of computers in the world exploded.
One could say the number of AI agents will explode and surpass humans on the internet in the next few years, and that reading AI-generated code and understanding what it does will be even more important than writing it.
That way you avoid horrific issues like this one [0]: the comments in the code are now consumed by the LLM, and given their inherently probabilistic and unpredictable nature, different LLMs produce different code; nothing short of a team of expert humans can guarantee it is correct.
We'll see if you're ready to read (and fix) an abundance of AI slop and messy architectures built by vibe-coders as maintenance costs and security risks skyrocket.
[0] https://sketch.dev/blog/our-first-outage-from-llm-written-co...