This "good riddance" attitude really annoys me. It frames programming as a necessary evil we can finally be rid of.
The ironic thing is that I’m aiming for something similar, just for different reasons. I also want to write less code.
Less code because code equals responsibility. Less code because "more code, more problems." Because bad code is technical debt. Because bugs are inevitable. Less code because fewer moving parts means fewer things can go wrong.
I honestly think I enjoy deleting code more than writing it. So maybe it’s not surprising that I’m skeptical of unleashing an AI agent to generate piles of code I don’t have a realistic chance of fully understanding.
For me, programming is fundamentally about building knowledge. Software development is knowledge work: discovering what we don’t know we don’t know, identifying what we do know we don’t know, figuring out what the real problem is, and solving it.
And that knowledge has to live somewhere.
When someone says "I don’t write code anymore," what I hear is: "I’ve shoved the knowledge work into a black box."
To me there’s a real difference between:
- knowledge expressed in language (which AI can produce ad nauseam), and
- knowledge that solidifies as connections in a human mind.
The latter isn’t a text file. It isn’t your "skills" or "beads." It isn’t hundreds of lines of Markdown slop. No. It’s a mental model: what the system is, why it’s that way, what’s safe to change, what leverage the abstractions provide, and where the fragile assumptions lie.
I’ve always carried a mental model of the codebase I’m working in. In my head it’s not "code" in the sense of language and syntax. It’s more like a "mind palace" I can step into, open doors, close doors, renovate, knock down a wall, add a new wing. It happens at a level where intuition and intellect blend together.
I'm not opposed to progress. Lately, with everything going on, I’ve started dividing code into two categories:
- code I don’t need to model in my head (low risk, follows established conventions, predictable, easy to verify), and
- code I can't help modelling in my head (business-critical, novel, experimental, or introduces new patterns).
I’m fine delegating the former to an AI agent. The latter is where domain knowledge and system understanding actually form. That’s where it gets interesting. That’s the fun part. And my "mind palace" craves staying in sync with it.
> Are you worried about the emerging notion that understanding code is somehow optional?
No, most of the chatter I’ve heard here has been the opposite. Changes have been poorly communicated, surprising, and expensive.
If he’s been vibe-coding all this and feeling impressed with himself, he’s smelling his own farts. The performance thus far has been unscientific, tone-deaf, and piss-poor.
Maybe vibe-coding is not for him.
I want to say, I (briefly) lived through the time when folks felt that if you didn't understand code at the memory-management or even assembly level, you weren't going to be able to make it great.
High-level languages, obviously, are a counter-argument: they demonstrate that you don't necessarily need to understand all the details to deliver a differentiated experience.
Personally, I can get pretty far with a high-level mental model of the system plus a deeper model of its key high-throughput areas. Most individuals aren't optimizing a system; they're building on top of a core innovation.
At the core you need to understand the system.
Code is *a* language that describes it, but there are others, and arguably, in a lot of cases, a visual language goes much further for our minds to operate on.
Thinking clearly is just as relevant, and failing to think clearly is just as encumbering, as it always was.
It does seem to me that the people who consistently get the best results from AI coding aren't that far away from the code. Maybe they aren't literally writing code anymore, but they're still communicating with the LLM in terms that come from software development experience.
I think there will still be value in learning how to code, not unlike learning arithmetic and trigonometry, even if you ultimately use a calculator in real life.
But I think there will also be value in being able to code yourself, even in real life. If you have to fix a bug in a software product and you know where to look and what to do, you might fix it with more precision than an LLM would, potentially requiring less re-testing.
Personally, I balk at the idea of taking responsibility for shipping a real software product that I (or, in a team environment, other humans on my team) don't understand. Perhaps that is my aerospace software background speaking -- and I realize most software is not safety-critical -- but I am so much more confident shipping something when I understand how it works.
I don't know. Maybe in time that notion will fade. As some are quick to point out: well, do you understand the compiled/assembled machine code? I do not. But I also trust the compilation process more than I trust LLMs. In aerospace, we even formally qualify tools like compilers to establish that they function as expected. LLM output, especially when guided by good prompts and well tested, may well be high quality, but I still lack trust in it.
Answer: no. Just harder.
I realize that director-level managers may not get this, because they've always lived and worked in the domain of "vibes," but that doesn't mean it's not true.