LLMs code for you. They write for you.
"LLMs" are like "screens" or "recording technology". They are not good or bad by themselves - they facilitate or inhibit certain behaviors and outcomes. They are good for some things, and they ruin some things. We, as their users, need to be deliberate and thoughtful about where we use them. Unfortunately, it's difficult to gain wisdom like this a priori.
However you feel about LLMs or AI right now, there are a lot of people with way more money and power than you who are primarily interested in further enriching and empowering themselves - and that means bad news for you. They're already looking into how to best leverage the technology against you, and the last thing they care about is what you want.
If I were still employed, I would also not want my employer to tolerate peers of mine rejecting the use of agents in their work out of personal preference. If colleagues were allowed to produce less work for equal compensation, I would want to be allowed to take compensated time off work by getting my own work done in faster ways - but that never flies with salaried positions, and getting work done faster is greeted with more work to do sooner. So it would be demoralizing to work alongside and be required to collaborate with folks who are allowed to take the slow and scenic route if it pleases them.
In other words, expect your peers to lobby against your right to deny agent use, as much as your employer.
If what you really want is more autonomy and ownership over your work, rejecting tool modernity won't get you that. It requires organizing. We learned this lesson already from how the Luddite movement and Jacobin reaction played out.
For example, it seems reasonable that using a good programming editor like Emacs or vi would offer a 2x (or more) productivity boost over using Notepad or Nano. Why hasn't Nano been banned, forbidden from professional use?
Anyway, we've had machines that do our dishes and laundry for a long while now.
A good proxy for understanding this reality is that wealthy people who pay others to do all of these things for them have almost uniformly terrible ideas. This is even true for artists themselves. Have you ever noticed how the albums all tend to get worse the more successful the musicians become?
It’s mundanity and tedium that forces your mind to reach out for more creative things and when you subtract that completely from your life, you’re generally left with self-indulgence instead of hunger.
A director is the most important person to the creation of a film. The director delegates most work (cameras, sets, acting, costumes, makeup, lighting, etc.), but can dive in and take low-level/direct control of any part if they choose.
because IME, you're completely wrong.
I mean, I get where you're coming from if you imagine it like the literal vibe coding this all started with, but that's just a party trick and falls off quickly as the project gets more complex.
to be clear, simple features in an existing project can often be done simply - with a single prompt making changes across multiple files - but that only works under _some circumstances_, and bigger features / more in-depth architecture work is still necessary to get the project to work according to your ideas.
And that part needs you to tell the LLM how it should do it - because otherwise you're rolling the dice on whether it's gonna be a clusterfuck after the next 5 changes.
LLMs define paths and ideas, choose routes, analyze, and so on. They don't just autocomplete. They create the entire poem.
Hard to define but feels similar to the "I know it when I see it" or "if it walks like a duck and quacks like a duck" definitions.
Garbage collection and managed types are for idiots who don't know what the hell they're doing; I'm leet af. You don't need to worry about accidentally writing heartbleed if you simply don't make mistakes in the first place.
If you're doing anything UI-based, it hasn't performed well for me, but for certain areas of software development, it's been an absolute dream.
My place for that is in the shower.
I had one of those shower epiphanies a couple mornings ago... And I fed it into a couple LLMs while I was playing a video game (taking some time over the holidays to do that), and by the afternoon I had that idea as working code: ~4500 LOC with that many more in tests.
People keep saying "I want LLMs to take out the laundry so I can do art, not me doing the laundry while LLMs do art." This is an example of LLMs doing the coding so I can rekindle a joy of gaming, which feels like it's leaning in the right direction.
I have some sympathy for them, but AI is here to stay, and it's getting better, faster, and there's no stopping it. Adapt and embrace change and find joy in the process where you can, or you're just going to be "right" and miserable.
The sad truth is that nobody is entitled to a perpetual advantage in the skills they've developed and sacrificed for. Expertise and craft and specialized knowledge can become irrelevant in a heartbeat, so your meaning and joy and purpose should be in higher principles.
AI is going to eat everything - there will be no domain in which it is better for humans to perform work than it will be to have AI do it. I'd even argue that for any given task, we're pretty much already there. Pick any single task that humans do and train a multibillion dollar state of the art AI on that task, and the AI is going to be better than any human for that specific task. Most tasks aren't worth the billions of dollars, but when the cost drops down to a few hundred dollars, or pennies? When the labs figure out the generalization of problem categories such that the entire frontier of model capabilities exceeds that of all humans, no matter how competent or intelligent?
AI will be better, cheaper, and faster in any and every metric of any task any human is capable of performing. We need to figure out a better measure of human worth than the work they perform, and it has to happen fast, or things will get really grim. For individuals, that means figuring out your principles and perspective, decoupling from "job" as meaning and purpose in life, and doing your best to surf the wave.
Isn't there something good about being embodied and understanding a medium of expression rather than attempting to translate ideas directly into results as quickly as possible?
My family eats out at a nice steak restaurant every Christmas because no one wants to cook. None of us like to cook.
I feel like it may be something inherently wrong with the interface, more than the actual expression of the tool. I'm pretty sure we are in some painful era where LLMs, quite frankly, help a ton with an absurd amount of stuff - underlining "ton" and "stuff" because it really is about "everything".
But it also generates a lot of frustration. I'm not convinced by the conversational status quo, for example, and I could easily see something inspired directly by what you said about drawing. There is something here about the experience - and it's really difficult to work on because it's inherently personal, and may require actually spending time and accumulating frustration to finally be able to express it through something else.
Ok time to work lmao
My role changes from coming up with solutions to babysitting a robotic intern. Not 100% of course. And of course an agent can be useful like 'intellisense on steroids'. Or an assistant who 'ripgreps' for me. There are advantages for sure. But for me the advantages don't match the disadvantages. LLMs take the heart out of what made me like programming: building stuff yourself with your near infinite lego box of parts and coming up with ideas yourself.
I'm only half convinced the LLMs will become as important to coding as they seem. And I'm hoping a sane balance will emerge at the other end of the hype. But if it goes where OpenAI etc. want it to go, I think I'll have to re-school to become an electrician or something...
i feel like that's all i'm doing with llms. just in the last hour i realized that i wanted an indexed string intern pool instead of passing string literals. the LLM refactored everything, and then i didn't have to worry about that lego piece anymore.
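A minimal sketch of what such an indexed string intern pool might look like (TypeScript; the class and method names here are illustrative, not the commenter's actual code):

```typescript
// A minimal indexed string intern pool: each unique string is stored once
// and referred to by a numeric index, so the rest of the code can pass
// cheap numbers around instead of string literals.
class InternPool {
  private strings: string[] = [];
  private ids = new Map<string, number>();

  // Return the existing index for `s`, or add it and return a fresh index.
  intern(s: string): number {
    const existing = this.ids.get(s);
    if (existing !== undefined) return existing;
    const id = this.strings.length;
    this.strings.push(s);
    this.ids.set(s, id);
    return id;
  }

  // Recover the original string from its index.
  lookup(id: number): string {
    return this.strings[id];
  }
}

const pool = new InternPool();
const a = pool.intern("error");
const b = pool.intern("error");
console.log(a === b, pool.lookup(a)); // true "error"
```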
For those who have swallowed the AI panacea hook, line, and sinker - those who say it's made them more productive, or that they no longer have to do the boring bits and can focus on the interesting parts of coding - I say: follow your own line of reasoning through. It demonstrates that AI is not yet powerful enough to NOT need to empower you, to NOT need to make you more productive. You're only ALLOWED to do the 'interesting' parts presently because the AI is deficient. Ultimately AI aims to remove the need for any human intermediary altogether. Everything in between is just a stop along the way, so for those it empowers: stop and think a little about the long-term implications. It may be that for you, right now, it is a comfortable position financially or socially, but your future self just a few short months from now may be dramatically impacted.
As someone said "I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes".
I can well imagine the blood draining from people's faces: the graduate coder who can no longer get onto the job ladder; the law secretary whose dream job, a dream dreamt from a young age, is being automated away; the journalist whose value has been substituted by a white text box connected to an AI model.
I don't have any ideas as to what should be done or, more importantly, what can be done. Pandora's box has been opened; Humpty Dumpty has fallen and can't be put back together again. AI feels like it has crossed the Rubicon. We must all collectively wait to see where the dust settles.
Economically, it's been a mistake to let wealth get stratified so unequally; we should have high progressive tax rates on income and need to reintroduce them, potentially alongside wealth taxes, to reduce the necessity of guessing a high-paying career over 5 years in advance. That simply won't be possible to do accurately with coming automation. But it is possible to grow social safety nets and decrease wealth disparity so that pursuing any marginally productive career is sufficient.
Practically, once automation begins producing more value than 25% or so of human workers, we'll have to transition to a collective ownership model and either pay dividends directly out of widget production, grant futures on the same with subsidized transport, or implement UBI. I tend to prefer a distribution-of-production model because it eliminates a lot of the rent-seeking risk of UBI; your landlord is not going to want 2x the number of burgers and couches you get distributed, whereas they'd happily double rent in dollars.
Once full automation hits (if it ever does; I can see augmented humans still producing up to 50% of GDP indefinitely [so far as anyone can predict anything past human-level intelligence] especially in healthcare/wellness) it's obvious that some kind of direct goods distribution is the only reasonable outcome; markets will still exist on top of this but they'll basically be optional participation for people who want to do that.
Career being the core of one's identity is so ingrained in society. Think about how schooling is directed towards producing what 'industry' needs. Education for education's sake isn't a thing. Capitalism sees to this and ensures so many avenues are closed to people.
Perhaps this will change but I fear it will be a painful transition to other modes of thinking and forming society.
Another problem is hoarding. Wealth inequality is one thing, but the unadulterated hoarding by the very wealthy means that wealth is unable to circulate as freely as it ought to. This burdens a society.
> Education for education's sake isn't a thing.
It is, but only for select members of society - off the top of my head, those with benefits programs to go after that opportunity, like 100% disabled veterans, or the wealthy and their families.
The code is as good as or even better than what I would have written. I gave Claude the right guidelines and made sure it stayed in line. There are a bunch of Playwright tests ensuring things don't break over time, and proving that things actually work.
I didn't have to mess with any of the HTML/CSS, which is usually what makes me give up on my personal projects. The result is really, really good, and I say that as someone who's been passionate about programming for about 15 years.
3 days for a complete webshop with Stripe integration, shipping labels and tracking automation, SMTP emails, admin dashboard, invoicing, CI/CD, and all the custom features that I used to dream of.
Sure, it's not a crazy innovative project, but it brings me a ton of value and liberates me from those overengineered, "generic", bulky CMSes. I don't have to pay $50 for a stupid plugin (that wouldn't really fit my needs anyway) anymore.
The future is both really exciting and scary.
But it does save me time in many other aspects, so I can't complain.
I just wish I could have competent enough local LLMs and not rely on a company.
Here’s a bunch of examples: moving code around, abstracting common functionality into a function and then updating all call sites, moving files around, pattern-matching off an already existing pattern in your code. Sometimes it can be fun and zen, or you’ll notice another optimization along the way... but most of the time it’s boring work an agent can do 10x faster than you.
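A toy illustration of the "abstract common functionality and update all call sites" chore, in TypeScript (the functions and the check are invented for the example):

```typescript
// Before (sketch): the same email check was copy-pasted inline into
// createUser, inviteUser, and a handful of other call sites.

// After: one shared helper, with every call site updated to use it -
// exactly the kind of mechanical, low-risk edit an agent grinds through quickly.
function assertValidEmail(email: string): void {
  if (!email.includes("@")) throw new Error(`invalid email: ${email}`);
}

function createUser(email: string): void {
  assertValidEmail(email); // was an inline copy of the check
  // ...create the user...
}

function inviteUser(email: string): void {
  assertValidEmail(email); // was another inline copy
  // ...send the invite...
}
```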
This right here, in your very own comment, is the crux. Unless you're rich or run your own business, your employer (and many other employers) is right now counting down the days until they can think of YOU as boilerplate and farm YOU out to an LLM. At the very least, where they currently employ 10, they are salivating about reducing it to 2.
This means painful change for a great many people. Appeals by analogy to historical changes, like motorised vehicles etc., miss the QUALITATIVE change occurring this time.
Many HN users may point to the Jevons paradox; I would like to point out that it may very well work right up until the point that it doesn't. After all, a chicken has always seen the farmer as a benevolent provider of food, shelter, and safety - that is, until of course THAT day when he decides he doesn't.
It may be more extreme than what you are suggesting here, but there are definitely people out there who think that code quality no longer matters. I find that viewpoint maddening. I was already of the opinion that the average quality of software is appalling, even before we start talking about generated code. Probably 99% of all CPU cycles today are wasted relative to how fast software could be.
Of course there are trade-offs: we can’t and shouldn’t all be shipping only hand-optimised machine code. But the degree to which we waste these incredible resources is slightly nauseating.
Just because something doesn’t have to be better, it doesn’t mean we shouldn’t strive to make it so.
That is exactly the moment when you cannot say anything about the code and cannot fix a single line by yourself.
If I can keep adding new features without introducing big regressions, that is good design and good code quality. (Of course there will come a time when that will not be possible and it will need a rewrite - same as software created by top-paid developers from the best universities.)
As long as LLM-written code keeps new bugs to the same level as hand-written code, I think LLMs writing code is much superior, just because of the speed with which it allows us to implement features.
We write software to solve (mostly) business efficiency problems. The businesses which will solve those problems faster than their competitors will win.
The standard distribution says some minority of IT projects are tragi-bad… I’ve worked with dudes who would copy and paste three different JavaScript frameworks onto the same page, as long as it worked…
AirFryers are great household tabletop appliances that help people cook, faster and easier than ever before, extraordinary dishes their ovens normally wouldn’t manage. A true revolution. A proper chef can use one to craft amazing food. They’re small and economical, awesome for students.
Chefs just call it “convection cooking” though. It’s been around for a minute. Chefs also know to go hot (when and how), and can use an actual deep fryer if and when they want.
The frozen food bags here have AirFryer instructions now. The Michelin star chefs are still focusing on shit you could buy books about 50 years ago…
I have no idea what the code quality is like in any of the software I use, but I can tell you all about how well they work, how easy to use they are, and how fast they run.
People seem to have a visceral reaction towards AI, where it angers them enough that even the idea that people might like it upsets them.
For you, maybe. In my experience, the constant need to babysit LLMs to avoid the generation of verbose, unmaintainable slop is exhausting, and I'd rather do everything myself. Even with all the meticulously detailed instructions, it feels like a slot machine - sometimes you get lucky and the generated code is somewhat usable. Of course, it also depends on the complexity and scope of the project and/or the tasks that you are automating.
Web programming is not fun. Years ago, a colleague who had pivoted in the early years said "Web rots your brain" (we had done some cool work together in real time optical food sorting).
I know it (web programming) gives a lot of people meaning, purpose, and a paycheck - becoming a specialist in an arcane art that is otherwise unplumbable by others. First it was just programming generally. But it's bifurcated into back end, front end, db, distributed, devops, meta, api, etc. The number of programmers I meet nowadays who are at startups that eventually "pivot" to making tools for the tool wielders is impressive (e.g. "we tried to make something for the general public, but that didn't stick; on the way, we learned how to make a certain kind of pick axe and are really hoping we can get some institutional set of axe wielders at a big digging corporation to buy into what we're offering"). Instead of "Software is eating the world", the real story these days may be "Software is eating itself".
Mired in a mountain of complexity we've created as a result of years of "throw it at the wall and ship what sticks", we're now doubling down on "stochastic programming". We're literally, mathematically, embracing "this probably works". The usefulness/appeal of LLMs is an indictment and a symptom, not a cause.
I'm constantly surprised by developers who like LLMs because "it's great for boilerplate". Why on earth were you wasting your time writing boilerplate before? These people are supposed to be programmers. Write code to generate the boilerplate, or abstract it away.
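In the spirit of that suggestion, a tiny sketch of generating boilerplate with ordinary code (TypeScript; the entities and route shape are invented for the example):

```typescript
// Generate repetitive CRUD route stubs from a list of entity names,
// instead of hand-typing the same block for each one. The entities and
// handler naming convention here are made up for the example.
const entities = ["user", "order", "invoice"];

function cap(s: string): string {
  return s[0].toUpperCase() + s.slice(1);
}

function crudStub(entity: string): string {
  return [
    `router.get("/${entity}s", list${cap(entity)}s);`,
    `router.post("/${entity}s", create${cap(entity)});`,
    `router.get("/${entity}s/:id", get${cap(entity)});`,
    `router.delete("/${entity}s/:id", delete${cap(entity)});`,
  ].join("\n");
}

// Emit the generated source; in practice you'd write it to a file
// as part of the build, rather than maintaining it by hand.
console.log(entities.map(crudStub).join("\n\n"));
```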
I suppose the path of least resistance is to ignore the complexity, let the LLM deal with it, instead of stepping back and questioning why the complexity is even there.
But, if you are in a work situation where LLMs are forced upon you in very high doses, then yes -- I understand the feeling.
Now, basically every new "AI" feature feels like a hack on top of yet another LLM. And sure the LLMs seem to keep getting marginally better, but the only people with the resources to actually work on new ones anymore are large corporate labs that hide their results behind corporate facades and give us mere mortals an API at best. The days of coding a unique ML algorithm for a domain specific problem are pretty much gone -- the only thing people pay attention to is shoving your domain specific problem into an LLM-shaped box. Even the original "AI godfathers" seem mostly disinterested in LLMs these days, and most people in ML seem dubious that simply scaling up LLMs more and more will be a likely path to AGI.
It seems like there's more excitement around AI for the average person, which is probably a good thing I suppose, but for a lot of people that were into the field they're not really that fun anymore.
In terms of programming, I think they can be pretty fun for side projects. The sort of thing you wouldn't have had time to do otherwise. For the sort of thing you know you need to do anyway and need to do well, I notice that senior engineers spend more time babysitting them than benefitting from them. LLMs are good at the mechanics of code and struggle with the architecture / design / big picture. Seniors don't really think much about the mechanics of code, it's almost second nature, so they don't seem to benefit as much there. Juniors seem to get a lot more benefit because the mechanics of the code can be a struggle for them.
LLM user here with no experience of ML besides fine-tuning existing models for image classification.
What are the exciting AI fields outside of LLMs? Are there pending breakthroughs that could change the field? Does it look like LLMs are a local maximum, and other approaches will win through - even just for other areas?
Personally I'm looking forward to someone solving 3D model generation as I suck at CAD but would 3D print stuff if I didn't have to draw it. And better image segmentation/classification models. There's gotta be other stuff that LLMs aren't the answer to?
Turns out that a lot of code is fine with this. Some parts of the industry still have more stringent standards however.
This is the same situation we were in decades ago, just before AI, and still are.
AI changes nothing about this statement; humans do not write perfect code.
Andrej Karpathy is one of the best engineers in the country, George Hotz is one of the best engineers in the country, etc.
> Andrej Karpathy is one of the best engineers in the country, George Hotz is one of the best engineers in the country, etc.
You have citations of them explicitly making this claim on behalf of all SWEs in all domains/langs? I'd find that surprising, if so.
It's just a tool, misuse of the tool can very much not be fun. When it's forced on you, most things tend to not be fun.
But I am having lots of fun with LLMs, their application as well as assisting me with coding. What used to be a frustrating scour of the internet for solutions and examples is now a question away to an LLM. The not-so-fun things I used to dread, I'm letting the LLM tackle it. Most of the code I write could probably be written by an LLM, but I am choosing to write the code specifically because it is fun, and because maintaining LLM generated code is not so fun.
I think this is a case of people taking extremes. Extremes are usually not a good thing. Don't over use or over depend on LLMs, but use them with moderation, letting them do things they shine at. Don't create solutions that are looking for problems (with any tool, not just LLMs). Don't fall for the deceptive traps of nostalgia, or be stuck in "back in my day".
It goes both ways too, don't tell someone used to vim and nano to start using cursor.sh!
I like driving sports cars in the desert, very fun. I hate driving anything in traffic. It's all about context.
The internet can be very toxic at times, and very user-hostile at others. It can also be great. I like HN for example, as I'm sure many of you do. I don't like visiting gizmodo or some ad-trodden site, or toxic sub-reddits. LLMs are similar, there are and will be terrible LLM usages (truly, think of slaughterbots and LLMs being used by autonomous attack drones), but also fun and great usages.
Hand-holding an LLM cheats me of all these things, and leaves me with the uneasy feeling that there is unexploded ordnance in there somewhere which will eventually go boom.
To each his or her own.
I think LLMs are fun. It does not get rid of the problem solving or troubleshooting or decision making. If anything, for me it completely resparked the hacker ethos in me. I got my start by being an idiot "script kitty" - so I am used to building bodged together things with code I only liminally understand.
There are so many new things I am trying and getting done that I feel like I am only limited by my creativity and my tolerance for risk.
I'm in the second camp, and I think the author is as well. For those of us, LLMs are kind of boring.
I wonder if customers even appreciate the organic artisanal labels that some sites are putting up e.g. https://play.date/games/diora/
Coding for me was always about the understanding and craftsmanship. The associated output and pay came as an adult, but that was never the point.
What’s not fun is the corporatization of AI. Being forced to use it even when it doesn’t make sense. Every project having to shove AI into it to get buy-in.
100%. The fun is in understanding, creating, explaining. It's not in typing, boilerplating, fixing missing imports, API mismatches, etc.
There's currently an enormous pressure on developers to pay lip service to loving AI tools. Expressing a differing opinion easily gets someone piled on for being outdated or not understanding things, from people who sometimes mainly do it to virtue-signal and perform their own branding exercise.
Open self-expression takes guts, and is hard to substitute for with AI assistance.
It’s amazing and scary. I was wondering what takeoff would look like, and now I’m living it, for better or worse.
a) People who gain value from the process of creating content.
b) People who gain value from the end result itself.
I personally am more of a (b): I did my time learning how to create things with code, but when I create things such as open-source software that people depend on, my personal satisfaction from the process of developing is less relevant. Getting frustrated with code configuration and writing boilerplate code is not personally gratifying.
Recently, I have been experimenting more with Claude Code and 4.5 Opus, and have had substantially more fun creating utterly bizarre projects that I suspect would have involved more frustration than fun to implement the normal way. It does still require brainpower to QA, identify problems, and identify potential fixes: it's not all vibes. The code quality, despite intuition, has none of the issues or bad code smells expected of LLM-generated code, and with my approach it actually runs substantially more performantly. (I'll do a full writeup at some point.)
However - and maybe I'm just an easily entertained simpleton - I find them really fun for exploring those random, not trivially Google-able questions that pop into my head on a daily basis, technical and otherwise. Most of my chats with ChatGPT begin with questions of this form. I keep my critical thinking cap on during the dialogue and always verify the output if it's to be used for anything serious, but I'd be lying if I said I didn't enjoy the process.
Asking a prompt to do something is asking a prompt to do something.
In my case, I fear the day when I can no longer program and have to give orders to a prompt instead.
>The joy of management is seeing my colleagues learn and excel, carving their own paths as they grow. Watching them rise to new challenges. As they grow, I learn from their growth; mentoring benefits the mentor alongside the mentee.
I fail to grasp how using LLMs precludes either of these things. If anything, doing so allows me to more quickly navigate and understand codebases. I can immediately ask questions or check my assumptions against anything I encounter.
Likewise, I don’t find myself doing less mentorship, but focusing that on higher-level guidance. It’s great that, for example, I can tell a junior to use Claude to explore X,Y, or Z design pattern and they can get their own questions answered beyond the limited scope of my time. I remember seniors being dicks to me in my early career because they were overworked or thought my questions were beneath them. Now, no one really has to encounter stuff like that if they don’t want to.
I’m not even the most AI-pilled person I know or on my team, but it just seems so staggeringly obvious how much of a force multiplier this stuff has become over the last 3-6 months.
The core issue is that AI is taking away, or will take away, or threatens to take away, experiences and activities that humans would WANT to do - things that give them meaning, many of which are tied to earning money and producing value for doing just that thing. Software/coding is one of these activities. One can do coding for fun, but doing the same coding where it provides value to others/society and financial upkeep for you and your family is far more meaningful.
If that is what you've been doing - a love of coding - I can well empathise with how the world is changing underneath your feet.
1. LLMs inspire and clearly strike a certain type of chord in people that other technology does not. For instance, can you imagine a post called "Rust Is Not Fun" at the top of HN? Or even replace "Rust" with technology that has some fans and some haters, like "PHP Is Not Fun". Can you ever imagine that finding equivalent traction? Why would you even write a post called "Skateboarding Is Not Fun"? I just did a search across HN and the only other thing I saw being called not fun that actually got traction was... Twitter.
2. The post makes two points about why LLMs are wrong, and I (as someone who gets a lot of mileage out of LLMs) pretty strongly disagree with both.
a) You can't get better at using LLMs ("Nurturing the personal growth of an LLM is an obvious waste of time")
This seems almost objectively false to me? I have gotten substantially better at prompting and using LLMs after about 6 months of daily use. I think at a higher level of abstraction with LLMs than I would when working directly with code, and this is a different type of thinking that requires effort and practice to develop. An LLM doesn't solve all problems you could ever have, it just allows you to think at a higher level of abstraction.
Previously I might convert one 300 LoC file from JS to TS (or whatever) and call it a day. Now, in the same amount of time, I might do 10. But obviously just asking the LLM to commit that doesn't cut it, because I might have broken something, so I need to think of some way to get the LLM to verify that I haven't broken anything - maybe I'd first get it to build some unit tests to my specifications, or a couple of linter rules, or something else tailored to the problem at hand. This is the "thinking at a higher level of abstraction", and I tend to find it an interesting puzzle. It's a very different type of thinking than "how am I going to refactor this single file", but it ends up being pretty enjoyable.
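As a concrete (invented) example of that verification step, a characterization test pinned down before the JS-to-TS conversion might look like this (using Node's built-in test runner; the function under test is hypothetical):

```typescript
import { strict as assert } from "node:assert";
import test from "node:test";

// Hypothetical function mid-conversion from JS to TS. A characterization
// test pins down its current behavior so an agent's refactor can be checked.
function formatPrice(cents: number): string {
  return `$${(cents / 100).toFixed(2)}`;
}

test("formatPrice behavior is unchanged after the TS conversion", () => {
  assert.equal(formatPrice(0), "$0.00");
  assert.equal(formatPrice(199), "$1.99");
  assert.equal(formatPrice(123456), "$1234.56");
});
```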
b) "For me, the joy of programming is understanding a problem in full depth"
I suspect that this is why a lot of people are frustrated with LLMs. I empathize here more than with part a). I mean, sure, I like digging into complex systems - it's fun and rewarding. But... and I feel this will be controversial, but I feel like this isn't really the most meaningful part of coding. Isn't the "complex problem solving" mostly a side-effect of the thing which is really the important thing (and the thing I like much more), which is delivering software that people enjoy and use? And while LLMs do a good bit of heavy lifting on the first, I don't find that they are capable at all of solving the second - which means that coding is just as fun for me as ever. In fact, probably more so, since I feel like I can iterate and build ideas faster for users.
Different people can find enjoyment and meaning in different parts of work, even if they do the same kind of work.
You enjoy delivering products to people. Other people might feel more enjoyment from the problem solving and understanding the system end to end.
It's not controversial for people to have different preferences. What might be controversial is saying that your preference is more correct / somehow "better" than someone else's preference.
If you’re letting the LLM do things you aren’t spending the time to understand in depth, you are shirking your professional responsibilities.
Edit: I know, I know, blink 3 times to signal SOS. I clearly only wrote the above under duress and threats from my managers. There's simply nothing fun about interacting with an entity that would be the stuff of science fiction just 5 years ago, no sir!
I have become a general, a master of a multitude of skeleton agents; my attention turns to the realm of effectively managing the unreproducible results of running the same incantations.
Like the sailor in the waters of a coastline he has roamed plenty of times: the currents are there, yet the waves are new every day.
Whatever limitation is removed, I should approach the market and test my creations swiftly and enrich myself, before the first legion of lich kings appears - they, better masters than I will ever be.
Absolutely disagree. I use LLM to speed up the process and ONLY accept code that I would write myself.
At the end of the day, guys like the author, for better or worse, are going to be replaced by the next generation of developers, who don't care for the 'aesthetics' in the same way.
People like this have a great deal to personally lose from LLMs. It makes them substantially less "special". Or so they think, but it is actually not true at all.
I think some of them resent having to level up again to stay relevant - like when a video game adds more levels to a game you thought you had already beaten. Fair enough, but such is life and natural competition.
When they come at LLMs with this attitude (gritting their teeth while prompting) it is no wonder they are grossly offended and disgusted by its outputs.
I've been tempted at times to hold these attitudes myself, but my approach for now is to see how much I can learn about this tool and use it for as much as I can while tokens are subsidized. Either it all pops with the bubble, or I have gained new, marketable skills. And no, your hand-coding skills don't just evaporate. In fact, I now have a new-found love of hand coding as a hobby, since that part of my brain is no longer used up by the end of the day with coding tasks for work.
For me, the fun part of programming is having the freedom to get my computer to do whatever I want. If I can't find a tool to do something, I can write it myself! That has been a magical feeling since I first discovered it all those years ago.
LLMs gives me the ability to do even more things I want, faster. I can conceptualize what I want to create, I can specify the details as much as I want, and then use an LLM to make it happen.
It is truly magical. I feel like I am programming in Star Trek, with the computer as an ally instead of as a receptacle for my code.