Ask HN: Is understanding code becoming "optional"?
3 points
1 hour ago
| 7 comments
On Twitter, Boris Cherny (creator of Claude Code) recently said that nearly 100% of the code in Claude Code is written by Claude Code, and that he personally hasn’t written code in months. Another tweet, from an OpenAI employee, went: "programming always sucked [...] and I’m glad it’s over."

This "good riddance" attitude really annoys me. It frames programming as a necessary evil we can finally be rid of.

The ironic thing is that I’m aiming for something similar, just for different reasons. I also want to write less code.

Less code because code equals responsibility. Less code because "more code, more problems." Because bad code is technical debt. Because bugs are inevitable. Less code because fewer moving parts means fewer things can go wrong.

I honestly think I enjoy deleting code more than writing it. So maybe it’s not surprising that I’m skeptical of unleashing an AI agent to generate piles of code I don’t have a realistic chance of fully understanding.

For me, programming is fundamentally about building knowledge. Software development is knowledge work: discovering what we don’t know we don’t know, identifying what we do know we don’t know, figuring out what the real problem is, and solving it.

And that knowledge has to live somewhere.

When someone says "I don’t write code anymore," what I hear is: "I’ve shoved the knowledge work into a black box."

To me there’s a real difference between:

- knowledge expressed in language (which AI can produce ad nauseam), and

- knowledge that solidifies as connections in a human mind.

The latter isn’t a text file. It isn’t your "skills" or "beads." It isn’t hundreds of lines of Markdown slop. No. It’s a mental model: what the system is, why it’s that way, what’s safe to change, what leverage the abstractions provide, and where the fragile assumptions lie.

I’ve always carried a mental model of the codebase I’m working in. In my head it’s not "code" in the sense of language and syntax. It’s more like a "mind palace" I can step into, open doors, close doors, renovate, knock down a wall, add a new wing. It happens at a level where intuition and intellect blend together.

I'm not opposed to progress. Lately, with everything going on, I’ve started dividing code into two categories:

- code I don’t need to model in my head (low risk, follows established conventions, predictable, easy to verify), and

- code I can't help modelling in my head (business-critical, novel, experimental, or introducing new patterns).

I’m fine delegating the former to an AI agent. The latter is where domain knowledge and system understanding actually form. That’s where it gets interesting. That’s the fun part. And my "mind palace" craves to stay in sync with it.

Does the emerging notion that understanding code is somehow optional worry you?

nacozarina
1 minute ago
[-]
Have CC users been raving about rock-solid stability improvements, more insightful spending analytics, and overall quantum improvements in customer experience?

No, most of the chatter I’ve heard here has been the opposite. Changes have been poorly communicated, surprising, and expensive.

If he’s been vibe-coding all this and feeling impressed with himself, he’s smelling his own farts. The performance thus far has been unscientific, tone-deaf and piss-poor.

Maybe vibe-coding is not for him.

reply
cyrusradfar
1 hour ago
[-]
The metaphor I'd use is: can you understand a story if you don't read it in the original language? Code is a language that describes the function.

I want to say, I've lived (briefly) through the time when folks felt that if you didn't understand the memory-management or even assembly-level workings of your code, you weren't going to be able to make it great.

High-level languages, obviously, are a counter-argument: they demonstrate that you don't necessarily need to understand all the details to deliver a differentiated experience.

Personally, I can get pretty far with a high-level mental model plus a deeper model of key high-throughput areas in the system. Most individuals aren't optimizing a system; they're building on top of a core innovation.

At the core you need to understand the system.

Code is *a* language that describes it, but there are others, and arguably, in a lot of cases, a nice visual language goes much further for our minds to operate on.

reply
mikaelaast
1 hour ago
[-]
Yes, and I like the points you are making. I feel like the mental models we make are exercises in a purer form of knowledge building than the code artifacts we produce. A kind of understanding that is liberated from the confines of languages.
reply
dapperdrake
15 minutes ago
[-]
Many irrelevant differences between programming languages are now exposed for what they are.

Thinking clearly is just as relevant, and just as encumbering, as it always was.

reply
sinenomine
1 hour ago
[-]
If the AI provides 0-1 nines of reliability and you refuse to provide the rest of the nines the customer requires, then who will provide them, and what is your role and claim to margin here?
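
For concreteness, "n nines" means a success rate of 1 - 10^-n, so 0-1 nines is at best 90%. A quick illustrative sketch of that arithmetic in C (the definition is standard; the numbers are just the formula, not a benchmark):

    /* "n nines" of reliability = a success rate of 1 - 10^-n.
       Compile with: cc nines.c -lm */
    #include <stdio.h>
    #include <math.h>

    int main(void) {
        for (int n = 1; n <= 5; n++) {
            double failure = pow(10.0, -n);
            printf("%d nine(s): %7.3f%% success, ~1 failure in %.0f attempts\n",
                   n, 100.0 * (1.0 - failure), 1.0 / failure);
        }
        return 0;
    }

Every nine the AI doesn't provide is another 10x reduction in failure rate that someone else has to engineer.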
reply
mikaelaast
1 hour ago
[-]
Creating work for the clean-up crew and leaving good money on the table for them (because it ain't gonna be cheap).
reply
tjr
1 hour ago
[-]
The "good riddance" attitude surprises me also. On one hand, it can be unpleasant to sort through obscure syntactical gobbledegook, like tracing around multiple levels of pointer indirection, but then again, I have found a certain enjoyable satisfaction in such things. It can be tough, but a good tough.

It does seem to me that the people who consistently get the best results from AI coding aren't that far away from the code. Maybe they aren't literally writing code any more, but they're still communicating with the LLM in terms that come from software development experience.

I think there will still be value in learning how to code, not unlike learning arithmetic and trigonometry, even if you ultimately use a calculator in real life.

But I think there will also be value in actually being able to code in real life. If you have to fix a bug in a software product and you know where to look and what to do, you might be able to fix it with more precise focus than an LLM would, potentially resulting in less re-testing.

Personally, I balk at the idea of taking responsibility for shipping a real software product that I (or, in a team environment, other humans on my team) don't understand. Perhaps that is my aerospace software background speaking -- and I realize most software is not safety-critical -- but I am so much more confident shipping something when I understand how it works.

I don't know. Maybe in time that notion will fade. As some are quick to point out, well, do you understand the compiled/assembled machine code? I do not. But I also trust the compilation process more than I trust LLMs. In aerospace, we even formally qualify tools like compilers to establish that they function as expected. LLM output, especially well-guided by good prompts and well-tested, may well be high quality, but I still lack trust in it.

reply
chrisjj
1 hour ago
[-]
Great question, but not specific to LLMs. Same applies to importing a C library.
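
For instance, a sketch of leaning on zlib without ever reading its internals (illustrative; any opaque dependency would make the same point):

    /* We trust the documented interface, not an understanding of the
       DEFLATE implementation inside. Compile with: cc demo.c -lz */
    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>

    int main(void) {
        const char *msg = "same bytes, smaller";
        Bytef out[128];
        uLongf outLen = sizeof(out);

        /* compress() is a black box here, like most imported libraries. */
        if (compress(out, &outLen, (const Bytef *)msg, strlen(msg)) != Z_OK) {
            fprintf(stderr, "compress failed\n");
            return 1;
        }
        printf("compressed %zu bytes to %lu\n",
               strlen(msg), (unsigned long)outLen);
        return 0;
    }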

Answer: no. Just harder.

reply
bediger4000
1 hour ago
[-]
That seems like exactly the wrong lesson to learn from LLM "AI". Under no circumstances does such an "AI" understand anything, much less important semantics, so human understanding becomes that much more important.

I realize that director-level managers may not get this, because they've always lived and worked in the domain of "vibes", but that doesn't mean it's not true.

reply