Wirth's Revenge
62 points | 6 hours ago | 5 comments | jmoiron.net
xnorswap
5 minutes ago
An interesting article, and it was refreshing to read something that had absolutely no hallmarks of LLM retouching or writing.

It contains a helpful insight: there are multiple modes in which to approach LLMs, and that helps explain the massive disparity in outcomes people get from them.

Off topic: This article is dated "Feb 2nd" but the footer says "2025". I assume that's a legacy generated footer and it's meant to be 2026?

satvikpendem
1 hour ago
While the author says that much of it can be attributed to the layers of software in between, added to make things more accessible to people, in my experience most cases come down to people being lazy in how they develop applications.

For example, there was the case of how Claude Code uses React to figure out what to render in the terminal, which in itself causes latency; its devs lament that they have "only" 16.7 ms to achieve 60 FPS. On a terminal. Which has been capable of far more than that since its inception. Primeagen shows an example [0] of how even the most change-heavy terminal applications run so much faster that there is no need to diff anything, just display the new change!

[0] https://youtu.be/LvW1HTSLPEk
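(For anyone wondering where that "16.7 ms" comes from: it is just the per-frame time budget implied by a refresh rate, a quick sketch in plain Python.)

```python
def frame_budget_ms(fps: float) -> float:
    """Milliseconds available to render one frame at `fps` frames per second."""
    return 1000.0 / fps

print(f"60 FPS  -> {frame_budget_ms(60):.2f} ms per frame")   # 16.67
print(f"144 FPS -> {frame_budget_ms(144):.2f} ms per frame")  # 6.94
```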

elliotec
50 minutes ago
Yeah, I think a lot of this can be attributed to institutional and infrastructural inertia, abstraction debt, second+-order ignorance, and narrowing of specialty. The people now building these things are probably good enough at React etc. to do whatever needs to be done with it almost anywhere, but their focus needs to be on ML.

The people who could make terminal stuff super fast at a low level are retired on an island, dead, or lack the other specialties required by companies like this. And users don't care as much about 16.7 ms on a terminal when the thing is building their app 10x faster, so the trade-off is obvious.

Cthulhu_
30 minutes ago
It makes me wish more graphics programmers would jump over to application development: 16.7 ms is a huge amount of time for them, and 60 frames per second is such a low target. 144 or bust.
jacquesm
7 minutes ago
And embedded too. But then again, they do what they do precisely because in that environment those skills are appreciated, and elsewhere they are not.
nickm12
22 minutes ago
I'm not sure what the high-level point of the article is, but I agree with the observation that we (programmers) should generally prefer having AI agents write correct, efficient programs that do what we want, rather than having the agents do that work themselves.

Not everything we want an agent to do is easy to express as a program, but we do know what computers are classically good at. If you had to bet on a correct outcome, would you rather have an AI model sort 5000 numbers "in its head", or write a program to do the sort and execute that program?
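To make the bet concrete, a minimal sketch in plain Python (the 5000 random numbers here are made up for illustration, not from any benchmark): the program an agent writes can be executed and verified mechanically, while an "in-head" sort offers no such guarantee.

```python
import random

# What the agent should produce: a real program, not an "in-head" answer.
numbers = [random.randint(0, 1_000_000) for _ in range(5000)]
result = sorted(numbers)  # deterministic and mechanically checkable

# Correctness can be verified, which a model's free-form token output cannot.
assert len(result) == len(numbers)
assert all(a <= b for a, b in zip(result, result[1:]))
```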

I'd think this is obvious, but I see people professionally inserting AI models in very weird places these days, just to say they are a GenAI adopter.

iberator
50 minutes ago
Dull article with no point, numbers, or anything of value. Just some quasi-philosophical mumbling. I wasted something like 10 minutes and I'm still not sure what the point of the article was.
pseudony
36 minutes ago
Have you considered that the article might be fine, but it's more a case of you not getting the point?
jokoon
18 minutes ago
Hardware is cheaper than programmers

Maybe one day that will change
