I’m perpetually bamboozled by my fellow software engineers who insist on proudly shouting from the rooftops, “Look at me, ma! I’m vibe coding!”, as if it’s some badge of honor to see who can churn out the greatest quantity of shitcode the fastest while surrendering the last scraps of their cognitive abilities to whichever LLM provider is best at the moment.
The difference is that I absolutely could write code by hand in Notepad and upload it to a server via FTP if I had to.
I think it’s a safe bet that people who learn to code with AI agents will not have the skills to code without them.
no
What do you think? Take 2/3 rounded to 10 digits after the decimal point: should that be 0.6666666666 or 0.6666666667? If I later add 1/3 to it, my result is 1, which seems more correct.
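For what it’s worth, the usual round-to-nearest convention gives the round-up answer, and it does make the rounded decimals sum back to 1. A quick sketch using `toFixed` (which rounds the actual floating-point value to the requested number of decimal digits):

```typescript
// 2/3 = 0.666666... — the 11th digit is a 6, so the 10th digit rounds up to 7.
const twoThirds = (2 / 3).toFixed(10);
console.log(twoThirds); // "0.6666666667"

// 1/3 = 0.333333... — the 11th digit is a 3, so it rounds down (truncates).
const oneThird = (1 / 3).toFixed(10);
console.log(oneThird); // "0.3333333333"

// Adding the rounded decimals by hand:
//   0.6666666667 + 0.3333333333 = 1.0000000000
// which is why the round-up variant "closes" back to exactly 1.
```

So under round-to-nearest you get 0.6666666667, and the 1/3 + 2/3 = 1 check is exactly the property that convention preserves.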
Thanks for this, brother. This translated through the internets and punched me right in the feels.
I think this is a fantastic point well summarised. I see people coming out of the woodwork here on HN, especially when copyright is discussed in relation to LLMs, to say that there's no difference between human creativity and what LLMs do. (And therefore of course training LLMs on everything is fair use.) I'm not here to argue against that point of view, just to illustrate what this "message" means.
I feel fairly similar to Nolan and to this day haven't really started using LLMs in a major way in my work.
I do occasionally use it when I might have previously gone to Stack Overflow. Today I asked it a mildly tricky TypeScript generic wrangling question that ended up using the Extract helper type.
However, I'm also feeling the joy of coding isn't quite what it used to be as I move along in my career. I really feel great about finding the right architecture for a problem, or optimising something that used to be a roadblock for users until it's hardly noticeable. But so much work can just be making another form, another database table, etc. And I am always teetering back and forth between "just write the easy code (or get an AI to generate it!)" and "you haven't found the right architecture that makes this trivial".
However, after 20 years in RL, I’m glad that RL (solving Markov decision processes) is finally coming into view as the primary generalist intelligence framework.
People made fun of us RL people because of the data/compute overhead, but now that every robotic controller and system is becoming RL-tuned, it’s just going to chew its way through the whole computing stack eventually.
I see so many people in AI burning out because they see all these narrow methods failing to generalize.
Exciting times ahead.