But LLMs are insidious because they're a particularly leaky abstraction. If you ask an LLM to implement something:
- First, there's only a chance it will output something that works at all
- Then, it may fail on edge-cases
- Then, unless the task is trivial, the code will be spaghetti, so neither you nor the LLM can extend it
Contrast that with a language like C: the original source may be indecipherable from the assembly, but the assembly is almost certainly correct. When GCC or Clang does fail, only an expert can figure out why, but that happens rarely enough that there's always an expert available to look at it.
Even if LLMs get better, English itself is a bad programming language, because it's imprecise and not modular. You can't describe tasks like "style a website exactly how I want" or "implement this complex algorithm" without inventing jargon or becoming extremely verbose, at which point you'd spend less effort, and write less, using a real programming language.
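As a toy illustration (data and names invented here), a requirement that already needs four qualifying clauses in English is one precise line in a real language:

```python
# English spec: "order people by family name, then given name, ignoring
# letter case, and keep the original order for exact ties" -- four clauses,
# and still ambiguous about what "ignoring case" means for non-ASCII names.
people = [("ng", "Ada"), ("Ng", "ada"), ("cox", "Ben")]

# The code version is shorter and nails every clause: the tuple key gives the
# ordering, .lower() the case folding, and sorted()'s stability the tie rule.
ordered = sorted(people, key=lambda p: (p[0].lower(), p[1].lower()))
```

The point isn't that the lambda is pretty; it's that every one of those English clauses maps to an unambiguous construct.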
If people end up producing all code (or art) with AI, it won't be through prompts but through fancy (perhaps project-specific) GUIs, if not brain interfaces.
People who don't understand the tools they use are doomed to reinvent them.
Perhaps the interface will evolve into pseudocode, where the AI fills in undefined parts and boilerplate with best estimates.
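A minimal sketch of what that workflow might look like (everything below is hypothetical, not any existing tool): the human writes the signature and the contract, and a code-filling tool supplies the routine body.

```python
def dedupe_keep_order(items):
    """Return items with duplicates removed, keeping first-seen order."""
    # Human-authored part ends at the docstring above; what follows is the
    # kind of boilerplate a filling tool might generate from that contract.
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out
```

The contract stays precise and reviewable; only the mechanical body is delegated.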
If we outsource the whole “hands that think” loop to agents, we may ship faster… but we also risk losing the embodied understanding that lets us explain why something is hard, where the edges are, and how to invent a better architecture instead of accepting “computer says no.”
I hope we keep making room for "luxury software": luxurious not in price but in care, the Swiss-watch mentality. Clean mechanisms, legible invariants, debuggable behavior, and the joy of building something you can trust and maintain for years. Hacker News needs more of that energy.
> By programming, they learn how the system fits together, where the limits are, and what is possible. From there they can discover new possibilities, but also assess whether new ideas are feasible.
Maybe I have a different understanding of "business context", but I would argue the opposite. AI tools let me spend much more time on the business impact of features: thinking through edge cases, talking with stakeholders, and talking with the project/product owners. Features that stakeholders once dismissed as complex and difficult are often much easier now with faster coding.
Code was almost never the limiting factor before. It's the business that is the limit.
When I co-wrote a paper some time ago, it felt very weird to have large parts of the paper of which I had only superficial knowledge (incidentally, I had retyped everything my co-author did, but in my own notation): no deep sense of how the results were obtained or of the difficulties encountered along the way. I guess this is how people who start out vibe coding must feel.
Are you sure you can't think of a commonly used operating system which doesn't?
Name ends with "ux", or maybe "BSD"?