That's the thing, hacker circles didn't always have this 'progressive' luddite mentality. This is the culture that replaced hacker culture.
I don't like AI, generally. I am skeptical of corporate influence, I doubt AI 2027 and so-called 'AGI'. I'm certain we'll be "five years away" from superintelligence for the foreseeable future. All that said, the actual workday is absolutely filled with busy work that no one really wants to do, and the refusal of a loud minority to engage with that fact is what's leading to this. It's why people can't post a meme, quote, or article (anything that could be interpreted, very often falsely, as AI-generated) in a public channel, or ask a chatbot to explain a hand-drawn image, without the off chance that they get an earful from one of these 'progressive' people. These people bring way more toxicity to daily life than the people they wage their campaigns against.
Somewhere along the lines of "everybody can code," we threw out the values and aesthetics that attracted people in the first place. What began as a rejection of externally imposed values devolved into a mouthpiece of the current powers and principalities.
This is evidenced by the new set of hacker values being almost purely performative when compared against the old set. The tension between money and what you make has been boiled away completely. We lean much more heavily on where someone has worked ("ex-Google") vs. their tech chops, which we (like management) have given up on trying to actually evaluate. We routinely devalue craftsmanship because it doesn't bow down to almighty Business Impact.
We sold out the culture, which paved the way for it to be hollowed out by LLMs.
There is a way out: we need to create a culture that values craftsmanship and dignifies work done by developers. We need to talk seriously and plainly about the spiritual and existential damage done by LLMs. We need to stop being complicit in propagating that noxious cloud of inevitability and nihilism that is choking our culture. We need to call out the bullshit and extended psyops ("all software jobs are going away!") that have gone on for the past 2-3 years, and mock it ruthlessly: despite hundreds of billions of dollars, it hasn't fully delivered on its promises, and investors are starting to be a bit skeptical.
In short, it's time to wake up.
Mostly I agree with you. But there's a large group of people who are way too contemptuous of craftsmen using AI. We need to push back against this arrogant attitude. Just as we shouldn't be contemptuous of a woodworking craftsman using a table saw.
Some tools are table saws, and some tools are subcontracting work out to lowest cost bidders to do a crap job. Which of the two is AI?
Yes, I know about the language / IDE / OS wars that software folks have indulged in before. But the reflexive shallow pro/anti takes on AI are way more extreme and are there even in otherwise serious people. And in general anti-intellectual sentiment, mindless follow-the-leader, and proudly ignorant stances on many topics are just out of control everywhere and curiosity seems to be dead or dying.
You can tell it's definitely tangled up with money though, and this remains a good filter for real curiosity. Math that isn't at least maybe related to ML is something HN is guaranteed to shit on. No one knows how to have a philosophy startup yet (WeWork and other culty scams notwithstanding!). Authors, readers, novels, and poetry aren't moving stock markets. So at least for now there's somewhere left for the intellectually curious to retreat.
If anything, the AI takes are much more meaningful. A Mac/PC flame war online was never going to significantly affect your career. A manager who is either all-in on AI or all-out on it can.
(It will be interesting to see how many people are able to accurately draw the conceptual line between what part of my sentiment is a joke and what part is serious.)
I wouldn’t blame any artist that is fundamentally against this tech in every way. Good for them.
Any person who posts a sufficiently long text online will be mistaken for an AI.
The recent results in LLMs and diffusion models are undeniably, incredibly impressive, even if they're not to the point of being universally useful for real work. However, they fill me with a feeling of supreme disappointment, because each is just this big black box we shoved an unreasonable amount of data into, and now the black box is the best image processing/natural language processing system we've ever made, and depending on how you look at it, they're either so unimaginably complex that we'll never understand how they really work, or they're so brain-dead simple that there's nothing to really understand at all. It's like some cruel joke the universe decided to play on people who like to think hard and understand the systems around them.
It happens, but I think it's pretty uncommon. What's a lot more common is people getting called out for offloading tasks to LLMs in a way that just breaches protocol.
For example, if we're having an argument online and you respond with a chatbot-generated rebuttal to my argument, I'm going to be angry. This is because I'm putting in effort and you're clearly not interested in having that conversation, but you still want to come out ahead for the sake of internet points. Some folks would say it's fair game, but consider the logical conclusion of that pattern: that we both have our chatbots endlessly argue on our behalf. That's pretty stupid, right?
By extension of this, there are plenty of people who use LLMs to "manage" their online footprint: write responses to friends' posts, come up with new content to share, generate memes, produce a cadence of blog posts. Anyone can ask an LLM to do that, so what's the point of generating this content in the first place? It's not yours. It's not you. So what's the game, other than - again - trying to come out on top for internet points?
Another fairly toxic pattern is when people use LLMs to produce work output without the effort to proofread or fact-check it. Over the past year or so, I've gotten so many LLM-generated documents that simply made no sense, and the sender considered their job to be done and left the QA to me.
We are angry because we grew up in an age when content was generated by humans and computer bots were inefficient. However, for newer generations, AI-generated content will be the new normal, like how we're used to seeing people on a big flat box (TV).
It's a message that's actually pretty relevant in an age of AI slop.
Progressiveness is forward looking and a proponent of rapid change. So it is natural that LLMs are popular amongst that crowd. Also, progressivism should be accepting of and encouraging the evolution of concepts and social constructs.
In reality, many people define "progressiveness" as "when things I like happen, not when things I don't like happen." When they lose control of the direction of society, they end up just as reactionary and dismissive as the people they claim to oppose.
>AI systems exist to reinforce and strengthen existing structures of power and violence. They are the wet dream of capitalists and fascists.
>Craft, expression and skilled labor is what produces value, and that gives us control over ourselves
To me, that sums up the author's biases. You may value skilled labor, but generally people don't. Nor should they. Demand is what produces value. The latter half of the piece falls into a diatribe of "Capitalism Bad".
I think there's a direct through-line from hacker circles to modern skepticism of the kind of AI discussed in this article: the kind where rules you don't control determine the behavior of the machine and where most of the training and operation of the largest and most successful systems can, currently, only be accessed via the cloud portals of companies with extremely questionable ethics.
... but I don't expect hackers to be anti-AI indefinitely. I expect them to be sorting out how many old laptops with still-serviceable graphics cards you have to glue together to build a training engine that can produce a domain-specific tool that rivals ChatGPT. If that task proves impossible, then I suspect based on history this may be the one place where hackers end up looking a little 'luddite' as it were.
... because "If the machine cannot be tamed it must be destroyed" is very hacker ethos.
I struggle with this discourse deeply. With many posters like OP, I align almost completely - unions are good, large megacorps are bad, death to fascists, etc. It's when we get to the AI issue that I do a bit of a double take.
Right now, AI is almost completely in the hands of a few large corp entities, yes. But once upon a time, so was the internet, so were processing chips, so was software. This is the power of the byte - it shrinks progressively and multiplies infinitely - thus making it inherently diffuse and populist (at the end of the day). It's not the relationship to our cultural standards that causes this - it's baked right into the structure of the underlying system. Computing systems are like sand - you can melt them into a tower of glass, but those are fragile and will inevitably become sand once again. Sand is famously difficult to hold in a tight grasp.
I won't say that we should stop fighting against the entrenchment of powers like OpenAI - fine, that's potentially a worthy fight and if that's what you want to focus on go ahead. However, if you really want to hack the planet, democratize power and distribute control, what you have to be doing is working towards smaller local models, distributed training, and finding an alternative to backprop that can compete without the same functional costs.
We are this close to having a guide in our pocket that can help us understand the machine better. Forget having AI "do the work" for you, it can help you to grok the deeper parts of the system such that you can hack them better - and if we're to come out of this tectonic shift in tech with our heads above water, we absolutely need to create models that cannot be owned by the guy with the $5B datacenter.
Deepseek shows us the glimmer of a way forward. We have to take it. The megacorp AI is already here to stay, and the only panacea is an AI that they cannot control. It all comes down to whether or not you genuinely believe that the way of the hacker can overcome the monolith. I, for one, am a believer.
He's pigeonholed at the same low pay rate and can't ever get a raise, until everyone in the same role also gets a raise (which will never happen). It traps people, because many union jobs can't or won't innovate, and when they look elsewhere, they are underskilled (and stuck).
You mention 'deepseek'. Are you joking? It's owned by the Chinese government..and you claim to hate fascism? Lol?
Big companies only have the power now because the processing power to run LLMs is expensive. Once there are breakthroughs, anyone can have the same power in their house.
We have been in a tech slump for a while now. Large companies will drive innovations for AI that will help everyone.
Deepseek is open source, which is why I mention it. It was made by the Chinese government but it shows a way to create these models at vastly reduced cost and was done with transparent methodology so we can learn from it. I am not saying "the future is Deepseek", I am saying "there are lessons to be learned from Deepseek".
I actually agree with you on the corporate bootstrap argument - I think we ought to be careful, because if they ever figure out how to control the output they will turn off outputs that help develop local models (gotta protect that moat!), but for now I use them myself to study and learn about building locally and I think everyone else ought to get on this train as well. For now, the robust academic discourse is a very very good thing.
People who haven't lived through the transition will likely come here to tell you how wrong you are, but you are 100% correct.
That happened because technology stopped being fun. When we were kids, seeing Penny communicating with Brain through her watch was neat and cool! Then when it happened in real life, it turned out that it was just a platform to inject you with more advertisements.
The "something" that happened was ads. They poisoned all the fun and interest out of technology.
Where is technology still fun? The places that don't have ads being vomited at you 24/7. At-home CNC (including 3d printing, to some extent) is still fun. Digital music is still fun.
Here on "hacker news" we get articles like this, meanwhile my brother is having a blast vibe-coding all sorts of stuff. He's building stuff faster than I ever dreamed of when I was a professional developer, and he barely knows Python.
In 2017 I was having great fun building smart contracts, constantly amazed that I was deploying working code to a peer-to-peer network, and I got nothing but vitriol here if I mentioned it.
I expect this to keep happening with any new tech that has the misfortune to get significant hype.
But it's fundamentally a correlation, and this observation is important because something can be completely ad-free and yet disempowering and hence unpleasant to use; it's just that vice-versa is rare.
Yes, a number of ad-supported sites are designed to empower the user. Video streaming platforms, for example, give me nearly unlimited freedom to watch what I want when I want. When I was growing up, TV executives picked a small set of videos to make available at 10 am, and if I didn’t want to watch one of those videos I didn’t get to watch anything. It’s not even a tradeoff, TV shows had more frequent and more annoying ads.
But they'd prefer if it was shorts.
Disagree. Ads hurt, but not as much as technology being invaded by the regular masses who have no inherent interest in tech for the sake of tech. Ads came after this since they needed an audience first.
Once that line was crossed, it all became far less fun for those who were in it for the sheer joy, exploration, and escape from the mundane social expectations wider society has.
It may encompass both "hot takes" to simply say money ruined tech. Once future finance bros realized tech was an easier path to the easy life than investment banking, all hope was lost.
To use the two examples I gave in this thread: digital music is more accessible than ever before and it's going from strength to strength. While at-home subtractive CNC is still in the realm of deep hobbyists, 3d printing* and CNC cutting/plotting* (Cricut, others) have been accessible to, and embraced by, the masses for a decade now and those spaces are thriving!
* Despite the best efforts of some of the sellers of these to lock down and enshittify the platforms. If this continues, this might change and fall into the general tech malaise, and it will be a great loss if that happens.
I'm sure that there are some people who enjoy it for the interface. I think a CRT terminal/emulator is peak aesthetic. And a few who aren't willing to invest the time to use a GUI over a terminal, because they learned the terminal first.
Calling either group a luddite is stupid, but if I were forced to defend one side: given that most people start with a GUI because it's so much easier, I'd rather make the argument that those who never progress to the faster, more powerful options deserve the insult of luddite.
Well, LLMs don't fix that problem.
(They fix the "need to train your classification model on your own data" problem, but none of you care about that, you want the quick sci-fi assistant dopamine hit.)
Hackers have historically derided any website generators or tools like ColdFusion[tm] or VisualStudio[tm] for that matter.
It is relatively new that some corporate owned "open" source developers use things like VSCode and have no issues with all their actions being tracked and surveilled by their corporate masters.
Please do not co-opt the term "hacker".
So why is it a surprise that hackers mistrust these tools pushed by megacorps, that also sell surveillance to governments, with “suits” promising other “suits” that they’ll be making knowledge obsolete? That people will no longer need to use their brains, that people with knowledge won’t be useful?
It’s not Luddism when people with an ethos of empowering the individual with knowledge resist these forces.
The problem is the vast masses falling under Turing's Law:
"Any person who posts a sufficiently long text online will be mistaken for an AI."
Not usually in good faith however.
Just taking what people argue for on its own merits breaks down when your capacity to read whole essays or comment chains is so easily overwhelmed by the speed at which people put out AI slop.
How do you even know that the other person read what they supposedly wrote, themselves, and you aren’t just talking to a wall because nobody even meant to say the things you’re analyzing?
Good faith is impossible to practice this way; I think people need to prove that the media was produced in good faith somehow before it can reasonably be analyzed in good faith.
It’s the same problem with 9000 slop PRs submitted for code review
Someone even argued that you could use prompts to make it look like it wasn't AI, and that this was the best explanation for why it didn't look like AI slop.
If we can't respect genuine content creators, why would anyone ever create genuine content?
I get that these people probably think they're resisting AI, but in reality they're doing the opposite: these attacks weigh way heavier on genuine writers than they do on slop-posters.
The blanket bombing of "AI slop!" comments is counterproductive.
It is kind of a self-fulfilling prophecy, however: keep it up and soon everything really will be written by AI.
A lot of hackers, including the black hat kind, DGAF about your ideological purity. They get things done with the tools that make it easy. The tools they’re familiar with.
Some of the hacker circles I was most familiar with in my younger days primarily used Windows as their OS. They did a lot of reverse engineering using Windows tools. They might have used .NET to write their custom tools because it was familiar and fast. They pulled off some amazing reverse engineering feats.
Yet when I tell people they preferred Windows and not Linux you can tell who’s more focused on ideological purity than actual achievements because eww Windows.
> Please do not co-opt the term "hacker".
Right back at you. To me, hacker is about results, not about enforcing ideological purity about only using the acceptable tools on your computer.
In my experience: The more time someone spends identifying as a hacker, gatekeeping the word, and trying to make it a culture war thing about the tools you use, the less “hacker” like they are. When I think of hacker culture I think about the people who accomplish amazing things regardless of the tools or whether HN finds them ideologically acceptable to use.
Same to me as well. A hacker would "hack out" some tool in a few crazy caffeine fueled nights that would be ridiculed by professional devs who had been working on the problem as a 6 man team for a year. Only the hacker's tool actually worked and saved 8000 man-hours of dev time. Code might be ugly, might use foundational tech everyone sneers at - but the job would be done. Maintaining it left up to the normies to figure out.
It implies deep-level expertise about a specific niche in the space they are hacking on. And it implies "getting shit done" - not making things full of design beauty.
Of course there are different types of hackers everywhere - but that was the "scene" to me back in the day. Teenage kids running circles around the greybeards clucking at the kids doing it wrong.
Same. Back then, and even now, the people who were busy criticizing other people for using the wrong programming language, text editor, or operating system were a different set of people than the ones actually delivering results.
In a way it was like hacker fashion: These people knew what was hot and what was not. They ran the right window manager on the right hardware and had the right text editor and their shell was tricked out. They knew what to sneer at and what to criticize for fashion points. But actually accomplishing things was, and still is, orthogonal to being fashionable.
The gatekeepers wouldn't consider him a hacker, but that's kinda what he is now.
I love it when the .NET threads show up here, people twist themselves in knots when they read about how the runtime is fantastic and ASP.NET is world class, and you can read between the lines of comments and see that it is very hard for people to believe these things while also knowing that "Micro$oft" made them.
Inevitably when public opinion swells and changes on something (such as VSCode), all the dissonance just melts away, and they were _always_ a fan. Funny how that works.
Ah yes, true hackers would never, say, build a Debian package...
Managing complexity has always been part of the game. To a very large extent it is the game.
Hate the company selling you a SaaS subscription to the closed-source tool if you want, and push for open-source alternatives, but don't hate the tool, and definitely don't hate the need for the tool.
> Please do not co-opt the term "hacker".
Indeed, please don't. And leave my true scotsman alone while we're at it!
I think, by definition, Luddites or neo-Luddites or whatever you want to call them are reactionaries but I think that's kind of orthogonal to being "progressive." Not sure where progressive comes in.
> All that said, the actual workday is absolutely filled with busy work that no one really wants to do, and the refusal of a loud minority to engage with that fact is what's leading to this.
I think that's maybe part of the problem? We shouldn't try to automate the busy work, we should acknowledge that it doesn't matter and stop doing it. In this regard, AI addresses a symptom but does not cure the underlying illness caused by dysfunctional systems. It just shifts work over so we get to a point where AI generated output is being analyzed by an AI and the only "winner" is Anthropic or Google or whoever you paid for those tokens.
> These people bring way more toxicity to daily life than who they wage their campaigns against.
I don't believe for a second that a gaggle of tumblrinas are more harmful to society than a single Sam Altman, lol.
I'm a programmer, been coding professionally for 10 something years, and coding for myself longer than that.
What are they talking about? What is this "devaluation"? I'm getting paid more than ever for a job I feel like I almost shouldn't get paid for (I'm just having fun), and programmers should be some of the most worry-free individuals on this planet: the job is easy and well-paid, there aren't a lot of health drawbacks if you have a proper setup, and it's relatively easy to find a new job when you need one (granted, the US seems to struggle with that specific point as of late, yet it remains true in the rest of the world).
And now, we're having a huge explosion of tools for developers, to build software that has to be maintained by developers, made by developers for developers.
If anything, it seems like Ballmer's plea of "Developers, developers, developers" has come true, and if there will be one profession left in 100 years when AI does everything for us (if the vibers are to be believed), then that'd probably be software developers and machine learning experts.
What exactly is being devalued for a profession that seems to be continuously growing and has been doing so for at least 20 years?
The compensation and hiring for that kind of inexpert work were completely out of sync with anything sustainable but held up for almost a decade because money was cheap. Now, money is held much more tightly and we stumbled into a tech that can cheaply regurgitate a lot of the trivial inexpert work, meaning the bottom fell out of these untenable, overpaid jobs.
You and I may not be affected, having charted a different path through the industry and built some kind of professional career foundation, but these kids who were (irresponsibly) promised an easy upper middle class life are still real people with real life plans, who are now finding themselves in a deeply disappointing and disorienting situation. They didn't believe the correction would come, let alone so suddenly, and now they don't know how they're supposed to get themselves back on track for the luxury lifestyle they thought they legitimately earned.
Neither has been true for a really long time.
Until Thanksgiving.
You're probably fine as a more senior dev...for now.
But if I was a junior I'd be very worried about the longevity I can expect as a dev. It's already easier for many/most cases to assign work to a LLM vs handholding a human through it.
Plus as an industry we've been exploiting our employer's lack of information to extract large salaries to produce largely poor quality outputs imo. And as that ignorance moat gets smaller, this becomes harder to pull off.
This is just not happening anywhere around me. I don't know why it keeps getting repeated in every one of these discussions.
Every software engineer I know is using LLM tools, but every team around me is still hiring new developers. Zero firing is happening in any circle near me due to LLMs.
LLMs can not do unsupervised work, period. They do not replace developers. They replace Stack Overflow and Google.
On the other hand, when I was staffed to lead a project that did have another senior developer who is one level below me, I tried to split up the actual work but it became such a coordination nightmare once we started refining the project because he could just use Claude code and it would make all of the modifications needed for a feature from the front end work, to the backend APIs, to the Terraform and the deployment scripts.
I would have actually slowed him down.
I don't want it integrated into my IDE; I'd rather just give it the information it needs to get me my result. But yeah, it's just another Google or Stack Overflow.
But here I am now. After filling in for lazy architects above me for 20 years while guiding developers to follow standards and build good habits and learning important lessons from talking to senior devs along the way, guess what, I can magically do it myself now. The LLM is the junior developer that I used to painstakingly explain the design to, and it screws it up half as much as the braindead and uncaring jr dev used to. Maybe I'm not a typical case, but it shows a hint of where things might be going. This will only get easier as the tools become more capable and mature into something more reliable.
Don't worry about where AI is today, worry about where it will be in 5-10 years. AI is brand new bleeding edge technology right now, and adoption always takes time, especially when the integration with IDEs and such is even more bleeding edge than the underlying AI systems themselves.
And speaking about the future, I wouldn't just worry about it replacing the programmer, I'd worry about it replacing the program. The future we are heading into might be one where the AI is your OS. If you need an app to do something, you can just make it up on the spot, a lot of classic programs will no longer need to exist.
And where will it be in 5-10 years?
Because right now, the trajectory looks like "right about where it is today, with maybe some better integrations".
Yes, LLMs experienced a period of explosive growth over the past 5-8 years or so. But then they hit diminishing returns, and they hit them hard. Right now, it looks like a veritable plateau.
If we want the difference between now and 5-10 years from now and the difference between now and 5-10 years ago to look similar, we're going to need a new breakthrough. And those don't come on command.
One year is the difference between Sonnet 3.5 and Opus 4.5. We're not hitting diminishing returns yet (mostly because of exponential capex scaling, but still). We're already committed to ~3 years of the current trajectory, which means we can expect similar performance boosts year over year.
The key to keep in mind is that LLMs are a giant bag of capabilities, and just because we hit diminishing returns on one capability, that doesn't say much if anything about your ability to scale other capabilities.
How do you mean committed?
The bulk of that capex is chips, and those chips are straight up depreciating assets.
Instead, they're the ones who benefit the most from using them.
It's only management that believes otherwise. Because of deceitful marketing from a few big corporations.
What are you talking about? You seem to live in a parallel universe. Every single time I or one of my colleagues tried this, the task failed tremendously hard.
This sounds kind of logical, but really isn't.
In reality you can ASSIGN a task to a junior dev and expect them to eventually complete it, and learn from the experience as well. Sure there'll likely be some interaction between the junior dev and mentor, and this is part of the learning process - something DESIRABLE since it leads to the developer getting better.
In contrast, you really can't "assign" something to an LLM. You can of course try to, and give it some "vibe coding" assignment like "build me a backend component to read the data from the database", but the LLM/agent isn't an autonomous entity that can take ownership of the assignment and be expected to do whatever it takes (e.g. coming back to you and asking for help) to get it done. With today's "AI" technology it's the AI that needs all the handholding, and the person using the AI is the one who has effectively taken the assignment, not the LLM.
Also, given the inability of LLMs to learn on the job, using an LLM as a tool to help get things done is going to be a groundhog day experience of having to micro-manage the process in the same way over and over again each time you use it... time that would have been better invested in helping a junior dev get up to speed and in the future be an independent developer that tasks can indeed be assigned to.
Funny you mention this because Opus 4.5 did this just yesterday. I accidentally gave it a task with conflicting goals, and after working through it for a few minutes it realized what was going on, summarized the conflict and asked me which goal should be prioritized, along with detailed pros and cons of each approach. It’s exactly how I would expect a mid level developer to operate, except much faster and more thorough.
We'll presumably get there eventually and build "artificial humans", but for now what we've got is LLMs - tools for language task automation.
If you want to ASSIGN a task to something/someone then you need a human or artificial human. For now that means assigning the task to a human, who will in turn use the LLM as a tool. Sure there may be some productivity increase (although some studies have indicated the exact opposite), but ultimately if you want to be able to get more work done in parallel then you need more entities that you can assign tasks to, and for the time being that means humans.
Handholding the human pays off in the long run more than hand holding the llm, which requires more hand holding anyway.
Claude doesn't get better as I explain concepts to it the same way a jr engineer does.
Sure - LLMs will do what they're told (to a specific value of "do" and "what they're told")
I use LLMs to build isolated components and I do the work needed to specialize them for my tasks and integrate them together. The LLMs take fewer instructions to do this and handle ambiguity far better. Additionally, because of the immediate feedback loop on the specs, I can try first with a minimally defined spec and interactively refine as needed. It takes me far less work to write specs for LLMs than it does for other devs.
I agree in the sense that those of us who work in for-profit businesses have benefited from employer’s willingness to spend on dev budgets (salaries included)—without having to spend their own _time_ becoming increasingly involved in the work. As “AI” develops it will blur the boundaries of roles and reshape how capital can be invested to deliver results and have impact. And if the power dynamics shift (ie. out of the class of educated programmers to, I dunno, philosophy majors) then you’re in trouble.
What do you think they're building all those datacenters for? Why do you think so much money is pouring into AI companies?
It's not to help make developers more efficient with code assistants.
Traditional computation will be replaced with bots in every aspect of software. The goal is to devalue our labor and replace it with computation performed by machines owned by the wealthy, who can lease this out.
If you can't see this coming you lack both imagination and historical perspective.
Five years ago Claude Code would have been essentially unimaginable. Consider this.
So sure, enjoy your job churning out buggy whips while you can, but you better have a plan B for when the automobiles truly arrive.
Economic waves never hit one sector and stop. The waves continue across the entire economy. You can’t think “companies will get rid of huge amounts of labor” and then stop asking questions. You need to then ask “what will companies do with decreased labor costs?” and “what could that investment look like, who will they need to hire to fulfill it?” and then “what will those workers do after their demand increases?” And so on.
Most of the economy is making things that aren’t really needed. Why bother keeping that afloat when it’s 90% trinkets for the proles? Once they’ve got the infra to ensure compliance why bother with all the fake work which is the real opium of the masses.
Programming has been devalued because more people can do it at a basic level with LLM tooling. People who I do not consider smart enough, or to have put in enough work, to output the things that they now do, and who don't really understand it themselves.
It is of course the new reality and now we all have to go find new markers/things to judge people's output by. That's the devaluation of the craft itself.
For what it's worth, this devaluation has happened many times in this field. ASM, compilers, managed GC languages, the cloud: abstractions have continually opened up the field to people the old timers consider unworthy.
LLMs are a unique twist on that standard pattern.
Not where I live, though. Competition is fierce, both in industry and academia, with most posts being saturated and most employees facing "HR optimization" in their late 30s. Not to mention working overtime, and its physical consequences.
I mean, not anywhere. Their comment has little correlation with reality, and seems to be a contrived, self-comforting fiction. Most firms have implemented hiring freezes if not actively downsizing their dev staff. Many extremely experienced devs are finding the market absolutely atrocious, getting zero bites.
And for all of the "well us senior devs are safe" sentiment often seen on here, many shops seem to be more comfortable hiring cheap and eager junior devs and foregoing seniors because LLMs fill in a lot of the "grizzled wisdom". The junior to senior ratio is rapidly increasing, and devs who lived on golden handshakes are suddenly finding their ego bruised and a market where they're fighting for low-pay jobs.
I get your viewpoint, though; physically exhausting work is probably much worse. I do want to point out that 40 hours has always been above average, and right now it's the default.
A lot of newly skilled job applicants can't find anything in the job market right now.
There's a huge difference between the perspective of someone currently employed versus that of someone in the market for a role, regardless of experience level. The job market of today is nothing like the job market of 3 years ago. More and more people are finding that out every day.
But as mentioned earlier, the situation in the US seems much more dire than elsewhere. People I know who entered the programming profession in South America, Europe and Asia for these last years don't seem to have more troubles than I had when I got started. Yes, it requires work, just like it did before.
If you don't trust me, give a non-programming job a try for 1 year and then come back and tell me how much more comfy $JOB was :)
This is a ridiculous statement. I know plenty of people (that are not developers) that make around the same as I do and enjoy their work as much as I do. Yes, software development is a great field to be in, but there's plenty of others that are just as good.
A lot of non-programmer jobs have a kind of union protection, pension plans and other perks even with health care. That makes a crappy salary and work environment bearable.
There was this VP of HR at an Indian outsourcing firm, and she said something to the effect that software jobs appear as though they pay to the moon, have an employee generate tremendous value for the company, and carry the general appeal that only smart people work these jobs. None of this is true for the majority of people. So after 10-15 years you actually kind of begin to see why someone might want to work a manufacturing job.
Life is long; job guarantees, pensions, etc. matter far more than 'move fast and break things' glory as you age.
I enjoy the practice of programming well enough, but I do not at all love it as a career. I don't hate it by any means either, but it's far from my first choice in terms of career.
I have a mortgage, 3 kids and a wife to support. So no. I don't think I'm going to do that. Also, I like my programming job.
EDIT: Sorry I thought you were saying the opposite. Didn't realize you were the OP of this thread.
Even after the layoffs, most big tech corps still have more employees today than they did in 2020.
The situation is bad, but the lesson to learn here is that a country should handle a pandemic better than "lowering interest rate to near-zero and increasing government spending." It's just kicking and snowballing the problem to the next four years.
[0]: https://www.dw.com/en/could-layoffs-in-tech-jobs-spread-to-r...
Remember that most of the economy is actually hidden from the stock market, its most visible metric. Over half the business is privately-owned small businesses, and at the local level forcibly shutting down all but essential-service shops was devastating. Without government spending, it's hard to imagine how most of those business owners and their employees would have survived, let alone their shops.
Yet we had no bread lines, no (increase in) migratory families chasing cash labor markets, and demands on charity organizations were heavy, but not overwhelming.
But you claim "a country should handle a pandemic better..." - what should we have done instead? Criticism is easy.
That is not unique to programming or tech generally. The overall US job market is kind of shit right now.
Negativity spreads so much more quickly than positivity online, and I feel as though too many people live in self reinforcing negative comment sections and blog posts than in the real world, which gives them a distorted view.
My opinion is that LLMs are doing nothing but accelerating what's possible with the craft, not eliminating it. If anything, this makes a single developer MORE valuable, because they can now do more with less.
Now the job market is flooded due to layoffs, further justifying lack of comp adjustment - add inflation, and you have "de-valuing" in direct form.
In my Big Tech job, I sometimes forget that some people can really enjoy what they do. It seems like you're in a fortunate position of both high pay and high enjoyment. Congratulations! Out of curiosity, what do you work on?
But in general, every job I've had has been "high pay and high enjoyment" even when I initially had "shit pay" compared to other programmers, and the product wasn't really fun, I was still programming, an activity I still love.
Compare this to the jobs I did before, where the physical toll makes it impossible to do anything after work as you're exhausted; even though I got paid more than at my first programming job, the fact that your body is literally unable to move once you get home makes the pay matter less and feel like less.
But for a programmer, you can literally sit still all day, have some meetings in a warm office, talk with some people, type some things into a document, sit and think for a while, and in the end of the month you get a paycheck.
If you never worked in another profession, I think you ("The Programmer") don't realize how lucky you are compared to the rest of the world.
I too have worked in shit jobs. I too appreciate that I am currently in a 70F room of my house, wearing a T-shirt and comfy pants, and able to pet my doggos at will.
I miss having jobs where at least a lot of the time I was moving around or working directly with other people. More than anything else I miss casual conversation with coworkers (which still happened with excruciating rarity even when I was doing most of my programming in an office).
I'm glad you love programming and find the career ideal. I don't mean to harp or whine, just pointing out your ideals aren't universal even among programmers.
Don't get me wrong, it's a lot harder for new developers to enter the industry compared to a decade ago, even in Western Europe, but it's still way easier compared to the lengths people I know who aren't programmers, or even in tech, have to go to.
US data does back it up, though. The tech labor sector outperformed all others in the last 10 years. https://www.bls.gov/emp/tables/employment-by-major-industry-...
I don't know what kind of work you do, but this depends a lot on what kind of projects you work on.
Of course, there are always exceptions, like programmers who need to hike to volcanoes to set up sensors and whatnot, but generally, programmers have one of the most comfortable jobs on the planet today. If you're a programmer, I think it should come relatively easy to acknowledge this.
I find it... very strange that you think software development is less mentally taxing than physical labor.
I find the more I specify about all the stuff I thought was hilariously pedantic hyper-analysis when I was in school, the less I have to interpret.
If you use test-driven, well-encapsulated object oriented programming in an idiomatic form for your language/framework, all you really end up needing to review is "are these tests really testing everything they should."
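To make that concrete, here's a minimal, hypothetical sketch in Python (the function and test names are made up for illustration, not from any real codebase): when the implementation itself is cheap to produce, the review effort concentrates on whether tests like these actually pin down the behavior that matters.

    # Hypothetical example: the review question is test coverage, not the implementation.
    import unittest

    def apply_discount(price: float, percent: float) -> float:
        """Return price reduced by percent, never below zero."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return max(price * (1 - percent / 100), 0.0)

    class ApplyDiscountTests(unittest.TestCase):
        def test_typical_discount(self):
            self.assertAlmostEqual(apply_discount(100.0, 25.0), 75.0)

        def test_zero_and_full_discount(self):
            self.assertAlmostEqual(apply_discount(80.0, 0.0), 80.0)
            self.assertAlmostEqual(apply_discount(80.0, 100.0), 0.0)

        def test_invalid_percent_rejected(self):
            # The reviewer asks: are edge cases like this actually asserted?
            with self.assertRaises(ValueError):
                apply_discount(50.0, 150.0)

    if __name__ == "__main__":
        unittest.main()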
Why wouldn't the same happen here? Instead of these programmers jamming out boilerplate 24/7, why are they unable to improve their skill further and move with the rest of the industry, if that's needed? Just like other professions adapt to how society is shaped, why should programming be an exception to that?
There's no law of nature that says this has to continue forever, but it's a trend that's been with us since the birth of the industry. You don't need to look at AI tools or methodologies or whatever. We have code reuse! Productivity has obviously improved, it's just that there's also an arms race between software products in UI complexity, features, etc.
If you don't keep improving how efficiently you can ship value, your work will indeed be devalued. It could be that the economics shift such that pretty much all programming work gets paid less, it could be that if you're good and diligent you do even better than before. I don't know.
What I do know is that whichever way the economics shake out, it's morally neutral. It sounds like the author of this post leans into a labor theory of value, and if you buy into that, well...You end up with some pretty confused and contradictory ideas. They position software as a "craft" that's valuable in itself. It's nonsense. People have shit to do and things they want. It's up to us to make ourselves useful. This isn't performance art.
If one doesn't subscribe to traditional Marxist ideology, this argument won't land the same way, but elements of these ideas have made their way into popular ideas of value.
Eh?
I'm happy for you (and envious), because that is not my experience. The job is hard. Agile's constant fortnightly deadlines, a complete lack of respect by the rest of the stakeholders for the work developers do (even more so now because "ai can do that"), changing requirements but an expectation to welcome changing requirements because that is agile, incredibly egotistical assholes that seem to gravitate to engineering manager roles, and a job market that's been dead for a few years now.
No doubt some will comment and say that if I think my job is hard I should compare it to a coal miner in the 1940's. True, but as Neil Young sang: "Though my problems are meaningless, that don't make them go away."
Are you sure about that?
Is there something specific you'd like to point me to, besides just replying with a soundbite?
Are you in China? India?
Also, I enjoy programming. Even typing boring shit like boilerplate, because I keep my brain engaged. As much as I type I keep thinking, is this really necessary? and maybe figure out something leaner. LLMs want to deprive me of the enjoyment of my work (research, learn) and of my brain. No thanks, no LLM for me. And I don't care whatever garbage it outputs; I'd much prefer that the garbage be your own output, or else you're useless.
The only use I have for LLMs and diffusion models is to entertain myself with stupid bullshit I come up with that I would find funny. I massively enjoy projects such as https://dumbassideas.com/
Note: Not taking into account the "classic" ML uses; my rant is aimed only at LLMs and the LLM craze. A tool made by grifters, for grifters.
You do realise your position of luck is not normal, right? This is not how it is for your average techie in 2025.
Actual data is convincing; few are providing it.
And even if I'm experienced now, I still have peers and acquaintances who are getting into the industry, I'm not sitting in my office with my eyes closed exactly.
Software engineering is easy? You live in a bubble; try teaching programming to someone new to it and you'll realize how muuuuch effort it requires.
Of course, if you're in south-eastern Europe or South Asia where all the jobs are being offshored, you're having the time of your life.
I don't know what else to say except that hasn't been my experience personally, nor the experience of my acquaintances who've re-skilled to become programmers these last few years, in Western Europe.
https://finance.yahoo.com/news/tech-job-postings-fall-across...
> Among the 27 countries analysed, European nations saw the steepest fall in tech job postings between 1 February 2020 and 31 October 2025,
> In absolute terms, the decline exceeded 40% in Switzerland (-46%) and the UK (-41%), with France (-39%) close behind.
> The United States showed a similar trend, with a decline of 35%. Austria (-34%), Sweden (-32%) and Germany (-30%) were also at comparable levels.
Don’t close your eyes and plug your ears and pretend you didn’t hear anything.
I'm not paid enough to clean up shit after an AI. Behind an intern or junior? Sure, I enjoy that because I can tell them how shit works, where they went off the rails, and I can be sure they will not repeat that mistake and be better programmers afterwards.
But an AI? Oh good luck with that and good luck dealing with the "updates" that get forced upon you. Fuck all of that, I'm out.
I enjoy making things work better. I'm lucky in that, because there's always been more brownfield work than greenfield work. I think of it as being an editor, not an author.
Hacking into vibe code with a machete is kinda fun.
The part where writing performant, readable, resilient, extensible, and pleasing code used to actually be a valued part of the craft? I feel like I'm being gaslit after decades of being lectured on how to be a better software developer, only to be told that my craft is pointless, the only thing of value is the output, and that I should be happy spending my day babysitting agents and reviewing AI code slop.
Not that there's anything wrong with crafting, but for those of us who just care about building things, LLMs are an absolute asset.
But one emerging reality for everyone should be that businesses are swallowing the AI-hype raw. You really need a competent and understanding boss to not be labeled a luddite, because let's be real - LLMs have made everyone more "productive" on paper. Non-coders are churning out small apps in record pace, juniors are looking like savants with the amount of code and tasks they finish, where probably 90% of the code is done by Claude or whatever.
If your org is blindly data/metric driven, it is probably just a matter of time until managers start asking why everyone else is producing so much while you're so slow.
Honestly I think you’re swallowing some of the hype here.
I think the biggest advantages of LLMs go to the experienced coders who know how to leverage them in their workflows. That may not even include having the LLM write the code directly.
The non-coders producing apps meme is all over social media, but the real world results aren’t there. All over Twitter there were “build in public” indie non-tech developers using LLMs to write their apps and the hype didn’t match reality. Some people could get minimal apps out the door that kind of talked to a back end, but even those people were running into issues not breaking everything on update or managing software lifecycle.
The top complaint in all of the social circles I have about LLMs is with juniors submitting LLM junk PRs and then blaming the LLM. It’s just not true that juniors are expertly solving tasks with LLMs faster than seniors.
I think LLMs are helpful, and any senior who isn't learning how to use them to their advantage (which doesn't mean telling the LLM what to write and hoping for the best) is missing out. I think people swallowing the hype about non-tech people and juniors doing senior work are getting misled about the actual ways to use these tools effectively.
There are just some things that need lots of extra scrutiny in a system, and the experienced ones know where that is. An LLM rarely seems to, especially for systems of anywhere near real world production size.
I work on the platform everyone builds on top of. A change here can subtly break any feature, no matter how distant.
AI just can't cope with this yet. So my team has been told that we are too slow.
Meanwhile, earlier this week we halted a rollout because of a bug introduced by AI: it worked around a privacy feature by just allow-listing the behavior it wanted, instead of changing the code to address the policy. It wasn't caught in review because the file that was changed didn't require my team's review (because we ship more slowly, they removed us as code owners for many files recently).
I've lost your fight but won mine before; you can sell this as risk reduction to your boss. I've never seen eng win this argument on quality grounds. Quality is rarely something that can be understood by company leadership. But having a risk reduction team that moves a bit slower and protects the company from extreme exposures like this is much harder to cut from the process. "Imagine the lawsuits missing something like this would cause." and "We don't move slower, we do more than the other teams; the code is more visible, but the elimination of mistakes that would be very expensive legally and reputationally is what we're the best at."
It's really easy to use LLMs to shift work onto other people. If all your coworkers use LLMs and you don't you're gonna get eaten alive. LLMs are unreasonably effective at generating large volumes of stuff that resembles diligent work on the surface.
The other thing is, tools change trade-offs. If you're in a team that's decided to lean into static analysis, and you don't use type checking in your editor, you're getting all the costs and less of the benefits. Or if you're in a team that's decided to go dynamic, writing good types for just your module is mostly a waste of time.
LLMs are like this too. If you're using a very different workflow from everyone else on your team, you're going to end up constantly arguing for different trade-offs, and ultimately you're going to cause a bunch of pointless friction. If you don't want to work the same way as the rest of the team just join a different team, it's really better for everyone.
This is so incredibly true.
This is my biggest gripe with LLM use in practice.
The resulting products, however, do not compare in quality to other industries' mass production lines. I wonder how long it takes until this all comes crashing down. Software mostly already is not a high-quality product; with Claude & co. it just gets worse.
edit: sentence fixed.
You can still buy high-quality goods made with care when it matters to you, but that's the exception. It will be the same with software. A lot of what we use will be mass produced with AI, and even produced in realtime on the fly (in 5 years maybe?). There will be some things where we'll pay a premium for software crafted with care, but for most it won't matter because of the benefits of rapidly produced software.
We've got a glimpse of this with things like Claude Artifacts. I now have a piece of software quite unique to my needs that simply wouldn't have existed otherwise. I don't care that it's one big js file. It works and it's what I need and I got it pretty much for free. The capability of things like Artifacts will continue to grow and we'll care less and less that it wasn't human produced with care.
Most of our private data lives in clouds now and there are already regular security nightmares of stolen passwords, photos etc. I fear that these incidents will accumulate with more and more AI generated code that is most likely not reviewed or reviewed by another AI.
Also regardless of AI I am more and more skipping cheap products in general and instead buying higher quality things. This way I buy less but what I buy doesn't (hopefully) break after a few years (or months) of use.
I see the same for software. Already before AI we were flooded with trash. I bet we could all delete at least half of the apps on our phones and nothing would be worse than before.
I am not convinced by the rosy future of instant AI-generated software, but the future will reveal what is to come.
We've been in that era for at least two decades now. We just only now invented the steam engine.
> I wonder how long it takes until this comes all crashing down.
At least one such artifact of craft and beauty already literally crashed two airplanes. Bad engineering is possible with and without LLMs.
Maybe I'm pessimistic, but I at least feel like there's a world of difference between a practice that encourages bugs and one that allows them through when there is negligence. The accountability problem needs to be addressed before we say it's like self-driving cars outperforming humans. On an errors-per-line basis, I don't think LLMs are on par with humans yet.
The only difference is that there is now a new high-throughput, high-error (at least for now) component editing the software.
Perhaps that’s what led to a decline in accountability and quality.
That's obvious. It's a matter of which makes it more likely
Is Good Engineering possible with LLMs? I remain skeptical.
But no! Programmers seem to only like working on giant scale projects, which only are of interest to huge enterprises, governments, or the open source quagmire of virtualization within virtualization within virtualization.
There's exactly one good invoicing app I've found which is good for freelancers and small businesses, while the number of potential customers is in the tens of millions. Why aren't there at least 10 good competitors?
My impression is that programmers consider it to be below their dignity to work on simple software which solves real problems and are great for their niche. Instead it has to be big and complicated, enterprise-scale. And if they can't get a job doing that, they will pretend to have a job doing that by spending their time making open source software for enterprise-scale problems.
Instead of earning a very good living by making boutique software for paying users.
I would love to do something like what you describe. Build a simple but solid and very specialized solution. However I am not sure there is demand or if I have the right ideas for what to do.
You mention invoicing and I think: there must be hundreds of apps for what you describe but maybe I am wrong. What is the one good app you mention? I am curious now :)
It’s a reasonable question, and my response is that I’ve encountered multiple specific examples now of a project being delayed a week because some junior tried to “save” a day by having AI write bad code.
Good managers generally understand the concept of a misleading productivity metric that fails to reflect real value. There’s a reason, after all, why most of us don’t get promoted based on lines of code delivered. I understand why people who don’t trust their managers to get this would round it off to artisanship for its own sake.
Are there for profit companies (not non profits, research institutes etc…) that are not metric driven?
It's not until later, when it's gotten to a larger size, that you have the resources to be metric driven.
I am actually less productive when using LLMs because now I have to read another entity's code and judge whether it fits my current business problem or not. If it doesn't, yay, refactoring prompts instead of tackling the actual problem. Also, I can write code for free; LLM coding assistants aren't free. I can fit business problems and edge cases into my brain given some time; an LLM is unaware of edge cases, legal requirements, decoupled dependencies, potential refactors, or the occasional call from the boss asking for something to be sneaked into the code right now. If my job forced me to use these tools, congrats, I'll update my address to some hut in a forest, eating cold canned ravioli for the rest of my life, because I sure don't want to work in a world where I am forced to use dystopian big tech machines I can't look into.
You don’t have to let the LLM write code for you. They’re very useful as a smart search engine for your code base, a smart refactoring tool, a suggestion generator, and many other ways.
I rarely have LLMs write code for me from scratch that I have to review, but I do give them specific instructions to do what I want to the codebase. They can do it much faster than I can search around the codebase and type out myself.
There are so many ways to make LLMs useful without having them do all the work while you sit back and judge. I think some people are determined to get no value out of the LLM because they feel compelled to be anti-hype, so they’re missing out on all the different little ways they can be used to help. Even just using it as a smarter search engine (in the modes where they can search and find the right sections of right articles or even GitHub issues for you) has been very helpful. But you have to actually learn how to use them.
> If my job forced me to use these tools, congrats, I'll update my address to some hut in a forest, eating cold canned ravioli for the rest of my life, because I sure don't want to work in a world where I am forced to use dystopian big tech machines I can't look into.
Okay, good luck with your hut in the forest. The rest of us will move on using these tools how we see fit, which for many of us doesn’t actually include this idea where the LLM is the author of the code and you just ask nicely and reject edits until it produces the exact code you want. The tools are useful in many ways and you don’t have to stop writing your own code. In fact, anyone who believes they can have the LLM do all the coding is in for a bad surprise when they realize that specific hype is a lie.
Citation needed for a claim of that magnitude.
This probably is the issue for me, I am simply not willing to do so. To me the whole AI thing is extremely dystopian so even on a professional level I feel repulsed by it.
We had an AWS and a Cloudflare outage recently, which has shown that maybe it isn't a great idea to rely on a few companies for a single _thing_. Integrating LLMs and using all these tools is just another bridge people depend on at some point.
I want to write software that works, preferably even offline. I want tools that do not spy on me (referring to that new Google editor, forgot the name). Call me once these tools work offline on my 8GB RAM laptop with a crusty CPU and I might put in the effort to learn them.
I share that concern about massive, unforced centralization. If there were any evidence for the hypothesis that LLM inference would always remain viable in datacenters only, I'd be extremely concerned about their use too.
But from all I've seen, it seems overwhelmingly likely that we'll have very powerful ones in our phones in at most a few years, and definitely in midrange laptops and above.
Thanks for being honest at least. So many HN arguments start as a desire to hate something and then try to bridge that into something that feels like a takedown of the merits of that thing. I think a lot of the HN LLM hate comes from people who simply want to hate LLMs.
> We had an AWS and a Cloudflare outage recently, which has shown that maybe it isn't a great idea to rely on a few companies for a single _thing_. Integrating LLMs and using all these tools is just another bridge people depend on at some point.
For an experienced dev using LLMs as another tool, an LLM outage isn’t a problem. You just continue coding.
It’s on the level of Google going down so you have to use another search engine or try to remember the URL for something yourself.
The main LLM players are also easy to switch between. I jump between Anthropic, Google, and OpenAI almost month to month to try things out. I could have subscriptions to all 3 at the same time and it would still be cheap.
I think this point is overblown. It’s not a true team dependency like when GitHub stopped working a few days back.
Anything worth reading beyond this transparent and hopefully unsuccessful appeal to tribalism?
Hackers have always tried out new technologies to see how they work – or break – so why would LLMs be any different?
> the devaluation of our craft, in a way and rate we never anticipated possible. A fate that designers, writers, translators, tailors or book-binders lived through before us
What is it with this perceived right to fulfilling, but also highly paid, employment in software engineering?
Nobody is stopping anyone from doing things by hand that machines can do at 10 times the quality and 100 times the speed.
Some people will even pay for it, but not many. Much will be relegated to unpaid pastime activities, and the associated craftspeople will move on to other activities to pay the bills (unless we achieve post-scarcity first). That's just human progress in a nutshell.
If the underlying problem is that many societies define a person's worth via their employability, that seems like a problem best fixed by restructuring said societies, not by artificially blocking technological progress. "progressive hackers"...
Who says we haven't tried it out?
FTA.
I know tons of people where "tried it out" means they've seen Google's abysmal search summary feature, or merely seen the memes and read news articles about how it's wrong sometimes, and haven't explored any further.
They seem just as enthusiastic as many of the pro AI voices here on HN, while the quality of their work declines. It makes me extremely skeptical of anyone who is enthusiastic about AI. It seems to me like it's a delusion machine
I was surprised how hard many here fell for the NFT thing, too.
Various people have been wrong on various predictions in the past, and it seems to me that any implied strong overlap is anecdotal at best and wishful (why?) thinking at worst.
The only really embarrassing behavior is never updating your priors when your predictions are wrong. Also, if you're always right about all your prognoses, you should probably also not be in the HN comments but on a prediction market, on-chain or traditional :)
Maybe so, but would it be possible to not dismiss it elsewhere? I just don't see the causal relation between AI and crypto, other than that both might be completely overhyped, world-changing, or boringly correctly estimated in their respective impact.
Any software engineer who shares this sentiment is doing their career a disservice. LLMs have their pitfalls, and I have been skeptical of their capabilities, but nevertheless I have tried them out earnestly. The progress of AI coding assistants over the past year has been remarkable, and now they are a routine part of my workflow. It does take some getting used to, and effectively using an AI coding assistant is a skill in and of itself that is worth mastering.
Sure, it can be overdone. But at the same time, it shouldn't be undersold.
Exactly. You can see that with the proliferation of chickenized reverse centaurs[1] in all kinds of jobs. Getting rid of the free-willed human in the loop is the aim now that bosses/stakeholders have seen the light.
[1] https://pluralistic.net/2022/04/17/revenge-of-the-chickenize...
The complexity of good code is still there: good code is still complicated.
Which means: 1. If software development is really solved, everyone else also gets a huge problem (CEOs, CTOs, accountants, designers, etc.), so we are at the back of the AI doomsday line.
And 2. it allows YOU to leverage AI a lot better which can enable you to create your own product.
In my startup, we leverage AI and we are not worried that another company just does the same thing because even if they do, we know how to write good code and architecture and we are also using AI. So we will always be ahead.
I've seen the argument that computers let us prop up and even scale governmental systems that would have long since collapsed under their own weight if they’d remained manual more than once. I'm not sure I buy it, but computation undoubtedly shapes society.
The author does seem quite keen on computers, but they've been "getting rid of the free-willed human in the loop" for decades. I think there might be some unexamined bias here.
I'm not even saying the core argument's wrong, exactly - clearly, tools build systems ("...and systems kill" - Crass). I guess I'm saying tools are value neutral. Guns don't kill people. So this argument against LLMs is an argument against all tools, unless you can explain how LLMs are a unique category of tool?
(Aside: calling out the lever sounds silly, but I think it's actually a great example. You can't do monumental architecture without levers, and the point in history where we start doing that is also the point where serious surplus extraction kicks in. I don't think that's coincidence).
In my third world country, motorbikes, scooters, etc have exploded in popularity and use in the past decade. Many people riding these things have made the roads much more dangerous for all, but particularly for them. They keep dying by the hundreds per month, not only just due to the fact that they choose to ride them at all, but how they ride them: on busy high speed highways, weaving between lanes all the time, swerving in front of speeding cars, with barely any protective equipment whatsoever. A car crash is frequently very survivable; motorcycle crash, not so much. Even if you survive the initial collision, the probability of another vehicle running you over is very high on a busy highway.
One would think: given the clear evidence for how dangerous these things are, why do people (1) ride them at all on the highway, and (2) in such a dangerous manner? One might excuse (1) by recognizing that many are poor and can't buy a car, and the motorbikes represent economic possibility: for use in a courier business, of being able to work much further from home, etc.
But here is the thing about (2): a motorbike wants to be ridden that way. No matter how well the rider recognizes the danger, only so much time can pass before the sheer expediency of riding that way overrides any sense of due caution. Where it would be safer to stop or keep to a fixed lane without any sudden movements, the rider thinks of the inconvenience of stopping, does a quick mental comparison with the (in their mind) minuscule additional risk, and carries on. Stopping or keeping to a proper lane in a car requires far less discipline than doing that on a motorbike.
So this is what people mean when they say tech is not value neutral. The tech can theoretically be used in many ways. But some forms of use are so aligned with the form of the tech that in practice it shapes behavior.
That's a lovely example. But is the dangerous thing the bike, or the infrastructure, or the system that means you're late for work?
I completely get what you're saying. I was thinking of tools in the narrowest possible way - of the tool in isolation (I could use this gun as a doorstop). You're thinking of the tool's interface with its environment (in the real world nobody uses guns as doorstops). I can't deny that's the more useful way to think about tools ("computation undoubtedly shapes society").
Certainly it's biased. I'm not the author, but to me there's a huge difference between computer/software as a tool, designed and planned, with known deterministic behavior/functionality, then put in the hands of humans, vs automating agency. The former I see as a pretty straightforward expansion of humanity's long-standing relationship with tools, from simple sticks to hand axes to chainsaws. The sort of automation AI-hype seems focused on doesn't have a great parallel in history. We're talking about building a statistical system to replace the human wielding the tool, mostly so that companies don't have to worry about hiring employees. Even if the machine does a terrible job and most of humanity, former workers and current users, all suffer, the bet is that it will be worth the cost savings.
ML is very cool technology, and clearly one of the major frontiers of human progress. At this stage though, I wish the effort on the packaging side was being spent on wrapping the technology in the form of reliable capabilities for humans to call on. Stuff like OCR at the OS level or "separate tracks" buttons in audio editors. The market has decided instead that the majority of our collective effort should go towards automated liability-sinks and replacing jobs with automation that doesn't work reliably.
And the end state doesn't even make sense. If all this capital investment does achieve breakthroughs and create true AGI, do investors really think they'll see returns? They'll have destroyed the entire concept of an economy. The only way to leverage power at that point would be to try to exercise control over a robot army or something similarly sci-fi and ridiculous.
See the nuclear bomb for an example.
Would you prefer we heat our homes by burning wood, carry water from the nearby spring, and ride horses to visit relatives?
Progress is progress, and has always changed things. It's funny that apparently "progressive" left-leaning people are actually so conservative at the core.
So far, in my book, the advancements in the last 100 or even more years have mostly always brought us things I wouldn't want to miss these days. But maybe some people would be happier to go back to the dark ages...
I am surprised (and also kind of not) to see this lack of critical reflection on HN of all places.
Saying "progress is progress" serves nobody, except those who drive "progress" in directions that benefits them. All you do by saying "has always changed things" is taking "change" at face value, assuming it's something completely out of your control, and to be accepted without any questioning it's source, it's ways or its effects.
> So far, in my book, the advancements in the last 100 or even more years have mostly always brought us things I wouldn't want to miss these days. But maybe some people would be happier to go back to the dark ages...
Amazing depiction of extremes as the only possible outcomes: either take everything that is thrown at us, or go back into a supposed "dark age" (which, by the way, is nowadays understood to not have been that "dark" at all). This, again, doesn't help have a proper discussion about the effects of technology and how it comes to be the way it is.
So are you able, realistically, to stop progress across a whole planet? To be honest, getting alignment across the planet to slow down or stop AI would be the equivalent of stopping capitalism and actually building a holistic planet for us.
I think AI will force the hand of capitalism, but I don't think we will be able to create a Star Trek universe without being forced into it.
There was progress in the Middle Ages, hence the difference between the early and late Middle Ages. Most information was passed by word of mouth instead of being written down.
"The term employs traditional light-versus-darkness imagery to contrast the era's supposed darkness (ignorance and error) with earlier and later periods of light (knowledge and understanding)."
"Others, however, have used the term to denote the relative scarcity of written records regarding at least the early part of the Middle Ages"
I'm more surprised that seemingly educated people have such simplistic views as "technology = progress, progress = good hence technology = good". Vaccines and running water are tech, megacorps owned "AI" being weaponised by surveillance obsessed governments is also tech.
If you don't push back on "tech" you're just blindly accepting whatever someone else decided for you. Keep in mind the benefits of tech since the 80s have mostly been pocketed by the top 10%; the plebs still work as much, retire as old, etc., despite what politicians and technophiles have been saying.
You don't like leaded gasoline? You must want us to walk everywhere. Come on...
Speaking of wonky analogies, have you considered that other people have access to these hammers and are aiming for your head? And that some people might not want to be hit on the head by a hammer?
This is not an interesting conversation.
If an amazing world changing technology like LLMs shows up on your doorstep and your response is to ignore it and write blog posts about how you don't care about it then you aren't curious and you aren't really a hacker.
Edit: Ha I see you edited "empty the dishwasher" to "hand wash the dishes". My thoughts exactly.
There's no hope for these people.
Like, I mean, at a certain point it runs out of chances. If someone can show me compelling quantitive evidence that these things are broadly useful I may reconsider, but until then I see no particular reason to do my own sampling. If and when they are useful, there will be _evidence_ of that.
(In fairness Segways seem to have a weird afterlife in certain cities helping to make tourists more annoying; there are sometimes niche uses for even the most pointless tech fads.)
My relative came to me to make a small business website for her. She knew I was a "coder". She gave me a logo and what her small business does. I fed all of it into Vercel v0 and out came a professional looking website that is based on the logo design and the business segment. It was mobile friendly too. I took the website and fed it to ChatGPT and asked it to improve the marketing copy. I fed the suggestions back to v0 to make changes.
My relative was extremely happy with the result.
It took me about 10 minutes to do all of this.
In the past, it probably would have taken me 2 weeks. One week to design, write copy, get feedback. Another week to code it, make it mobile friendly, publish it. Honestly, there is no way I could have done a better job given the time constraint.
I even showed my non-tech relative how to use v0. Since all changes requested to v0 was in english, she had no trouble learning how to use it in one minute.
These things are wicked, and unlike some new garbage javascript framework, it's revolutionary technology that regular people can actually use and benefit from. The mobility they provide is insane.
https://old.reddit.com/r/ElectricUnicycle/comments/1ddd9c1/i...
There is something to be said for the protective shell of a vehicle.
But - even funnier - the thing is an urbanist tech-bro toy? My days of diminishing the segway's value are certainly coming to a middle.
That being said the metaverse happened but it just wasn't the metaverse those weird cringy tech libertarians wanted it to be. Online spaces where people hang out are bigger than ever. Segways also happened they just changed form to electric scooters.
In any case, Segways promised to be a revolution to how people travel - something I was already doing and something that the marketing was predicated on. 3DTVs - a "better" way to watch TV, which I had already been doing. NFTs - (among other things) a financially superior way to bank, which I had already been doing. Metaverse - a more meaningful way to interact with my team on the internet, which I had already been doing.
If a PC crashes when I use more than 20% of its soldered memory, I throw it away.
If a mobile phone refuses to connect to a cellular tower, I get another one.
What I want from my tools is reliability. Which is a spectrum, but LLMs are very much on the lower end.
Just yesterday, AirDrop wouldn't work until I restarted my Mac. Google Drive wouldn't sync properly until I restarted it. And a bug in Screen Sharing file transfer used up 20 GB of RAM to transfer a 40 GB file, which used swap space so my hard drive ran out of space.
My regular software breaks constantly. All the time. It's a rare day where everything works as it should.
LLMs have certainly gotten to the point where they seem about as reliable as the rest of the tools I use. I've never seen it say 2+2=5. I'm not going to use it for complicated arithmetic, but that's not what it's for. I'm also not going to ask my calculator to write code for me.
There are plenty of people manufacturing their expectations around the capabilities of LLMs inside their heads for some reason. Sure there's marketing; but for individuals susceptible to marketing without engaging some neurons and fact checking, there's already not much hope.
Imagine refusing to drive a car in the 60s because they hadn't reached 1,000 bhp yet. Ahaha.
That’s very much a false analogy. In the 60s, cars were very reliable (not as much as today’s cars), but they were already an established means of transportation. 60s cars are much closer to today’s cars than 2000s computers are to current ones.
"reliability" can mean multiple things though. LLM invocations are as reliable (granted you know how program properly) as any other software invocation, if you're seeing crashes you're doing something wrong.
But what you're really talking about is "correctness" I think, in the actual text that's been responded with. And if you're expecting/waiting for that to be 100% "accurate" every time, then yeah, that's not a use case for LLMs, and I don't think anyone is arguing for jamming LLMs in there even today.
Where the LLMs are useful, is where there is no 100% "right or wrong" answer, think summarization, categorization, tagging and so on.
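To make that invocation-vs-correctness split concrete, here is a minimal sketch in Python (the endpoint URL and response shape are hypothetical placeholders, not any real provider's API): the call is retried and time-bounded like any other network call, which is the reliability part; nothing in it guarantees the returned text is correct.

    import time
    import requests

    def ask_llm(prompt: str, retries: int = 3, timeout: float = 30.0) -> str:
        # Invocation reliability: retries, backoff, and a timeout, just like
        # any other network call. This says nothing about answer correctness.
        last_error = None
        for attempt in range(retries):
            try:
                resp = requests.post(
                    "https://llm.example.com/v1/complete",  # hypothetical endpoint
                    json={"prompt": prompt},
                    timeout=timeout,
                )
                resp.raise_for_status()
                return resp.json()["text"]  # hypothetical response shape
            except (requests.RequestException, KeyError, ValueError) as exc:
                last_error = exc
                time.sleep(2 ** attempt)  # simple exponential backoff
        raise RuntimeError(f"LLM call failed after {retries} attempts") from last_error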
the quality of being able to be trusted or believed because of working or behaving well
For a tool, I expect “well” to mean that it does what it’s supposed to do. My linter is reliable when it catches bad patterns I wanted it to catch. My editor is reliable when I can edit code with it and the commands do what they’re supposed to do. So for generating text, LLMs are very reliable. And they do a decent job at categorizing too. But code is formal language, which means correctness is the end result. A program may be valid and incorrect at the same time.
It’s very easy to write valid code. You only need the grammar of the language. Writing correct code is another matter and the only one that is relevant. No one hire people for knowing a language grammar and verifying syntax. They hire people to produce correct code (and because few businesses actually want to formally verify it, they hire people that can write code with a minimal amount of bugs and able to eliminate those bugs when they surface).
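A trivial illustration of that valid-vs-correct gap (my own example, not the commenter's): the function below parses and runs without complaint, i.e. it is valid, yet it is wrong for almost every input.

    def average(values):
        # Syntactically valid and happily accepted by the interpreter...
        return sum(values) / 2  # ...but only correct when len(values) == 2

    print(average([1, 2, 3]))  # prints 3.0; the actual mean is 2.0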
The problem is that, historically speaking, you have two choices;
1. Resist as long as you can, risking being labeled a Luddite or whatever.
2. Acquiesce.
Choice 1 is fraught with difficulty, like a dinosaur struggling to breathe as an asteroid came and changed the atmosphere it had developed lungs to use. Choice 2 is a relinquishment of agency, handing over control of the future to the ones pulling the levers on the machine. I suppose there is a rare Choice 3 that only the elite few are able to pick, which is to accelerate the change.
My increased cynicism about technology was not something that I started out with. Growing up as a teen in the late-80's/early-90's, computers were hotly debated as being either a fad that would die out in a few years or something that was going to revolutionize the way we worked and give us more free time to enjoy life. That never happened, obviously. Sure, we get more work done in less time, but most of us still work until we are too broken to continue and we didn't really gain anything by acquiescing. We could have lived just fine without smartphones or laptops (we did, I remember) and all the invasive things that brought with it such as surveillance, brain-hacking advertising and dopamine burnout. The massive structures that came out of all the money and genius that went into our tech became megacorporations that people like William Gibson and others warned us of, exerting a level of control over us that turned us all into batteries for their toys, discarded and replaced as we are used up. It's a little frightening to me, knowing how hyperbolic that used to sound 30 years ago, and yet, here we stand.
Generative AI threatens so much more than just altering the way we work, though. In some cases, its use in tasks might even be welcomed. I've played with Claude Code, every generative model that Poe.com has access to, DeepSeek, ChatGPT, etc...they're all quite fascinating, especially when viewed as I view them; a dark mirror reflecting our own vastly misunderstood minds back to us. But it's a weird place to be in when you start seeing them replace musicians, artists, writers...all things that humanity has developed over many thousands of years as forms of existential expression, individuality, and humanness because there is no question that we feel quite alone in our experience of consciousness. Perhaps that is why we are trying to build a companion.
To me, the dangers are far too clear and present to take any sort of moderate position, which is why I decided to stop participating in its proliferation. We risk losing something that makes us us by handing off our creativity and thinking to this thing that has no cognizance or comprehension of its own existence. We are not ready for AI, and AI is not ready for us, but as the Accelerationists and Broligarchs continue to inject it into literally every bit of tech they can, we have to make a choice; resist or capitulate.
At my age, I'm a bit tired of capitulating, because it seems every time we hand the reins over to someone who says they know what they are doing, they fuck it up royally for the rest of us.
And by any metric, the average citizen of a developed country is wildly better off than a century or two ago. All those moments of change in the past that people wrung their hands over ultimately improved our lives, and this probably won’t be any different.
It's nuanced, can be abused, but can be beneficial when used responsibly in certain ways. It's a tool. It's a powerful tool, so treat it like a powerful tool: learn about it enough to safely use it in a way to improve your life and those around you.
Avoiding it completely whilst confidently berating it without experience is a position formed from fear, rather than knowledge or experience. I'm genuinely very surprised this article has so many points here.
I do Vibe Code occasionally, Claude did a decent job with Terraform and SaltStack recently, but the words ring true in my head about how AI weakens my thinking, especially when it comes to Python or any programming language. Tread carefully indeed. And reading a book does help - I've been tearing through the Dune books after putting them off too long at my brother's recommendation. Very interesting reflections in those books on power/human nature that may apply in some ways to our current predicament.
At any rate, thank you for the thoughtful & eloquent words of caution.
I'm sure github has documents out there somewhere that explain this, but typing that prompt took me two minutes. I'm able daily to get fast answers to complex questions that in years past would have taken me potentially hours of research. Most of the time these answers are correct, and when they are wrong it still takes less time to generate the correct answer than all that research would have taken before. So I guess my advice is: if you're starting out in this business worry less about LLMs replacing you and more about how to efficiently use that global expert on everything that is sitting on your shoulder. And also realize that code, and the ability to write working code, is a small part of what we do every day.
So what people do is collect documentation. Give it a glance (or at least the TOC), then start the process of understanding the concepts. Sure, you can ask for the escape code for setting a terminal title, but will it say that not all terminals support that code? Or that piping does not strip out escape codes? That's the kind of gotcha you learn from proper manuals.
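As a concrete instance of both gotchas, a small sketch (assuming an xterm-compatible terminal; the OSC sequence below is not honoured by every terminal): emit the title escape only when stdout is actually a terminal, so the raw bytes don't end up in whatever the output gets piped into.

    import sys

    def set_terminal_title(title: str) -> None:
        # OSC 0 sets the window/icon title on xterm-compatible terminals.
        # Guard on isatty() because piping does not strip escape codes out.
        if sys.stdout.isatty():
            sys.stdout.write(f"\033]0;{title}\007")
            sys.stdout.flush()

    set_terminal_title("running tests")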
There's a real danger in that they use so many resources though. Both in the physical world (electricity, raw materials, water etc.) as well as in a financial sense.
All the money spent on AI will not go to your other promising idea. There's a real opportunity cost there. I can't imagine that, at this point, good ideas go without funding because they're not AI.
Anyway, the point I'm getting to is that it was glorious to understand what every bit of every register and every I/O register did. There were NO interposing layers of software that you didn't write yourself or didn't understand completely. I even wrote a disassembler for the BASIC ROM and spent many hours studying it so I could take advantage of useful subroutines. People even published books that had all of that mapped out for you (something like "Secrets of the TRS-80 ROM Decoded").
Recently I have been helping a couple teenagers in my neighborhood learn Python a couple hours a week. After installing Python and going through the foundational syntax, you bet I had them write many of those same games. Even though it was ASCII monsters chasing their character on the screen, they loved it.
It was similar to this, except it was real-time with a larger playfield:
https://www.reddit.com/r/retrogaming/comments/1g6sd5q/way_ba...
Valuation is fundamentally connected to scarcity. 'Devaluation' is just negative spin for making something plentiful.
When circumstances change to make something less scarce, one cannot expect to get the same value for it because of past valuation. That is just rent-seeking.
I really don't see the harm in using them this way that can't also be said about traditional search engines. Search engines already use algorithms, it's just swapping out the algorithm and interface. Search engines can bias our understanding of anything as much as any LLM, assuming you attempt to actually verify information you get from an LLM.
I'm of the opinion that if you think LLMs are bad without exception, you should either question how we use technology at all or question this idea that they are impossible to use responsibly. However I do acknowledge that people criticize LLMs while justifying their usage, and I could just be doing the same thing.
Either way it's a lost cause.
Without an explanation of what they author is calling out as flaws, it is hard to take this article seriously.
I know engineers I respect a ton who have gotten a bunch of productivity upgrades using "AI". My own learning curve has been to see Claude say "okay, these integration tests aren't working. Let me write unit tests instead" and go on when it wasn't able to fix a Jest issue.
Just the job for an AI agent!
So what I did is this - I wrote the app in Django, because it's what I'm familiar with.
Then in the view for the search page, I picked apart the search terms. If they start with "01" it's an old phone number so look in that column, if they start with "03" it's a new phone number so look in that column, if they start with "07" it's a mobile, if it's a letter followed by two digits it's a site code, if it's numeric but doesn't have a 0 at the start it's an internal number, and if it doesn't match anything then see if it exists as a substring in the description column.
There we go. Very fast and natural searching that Does What You Mean (mostly).
No Artificial Intelligence.
All done with Organic Home-grown Brute Force and Ignorance.
Because that's sometimes just what you need.
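For what it's worth, a minimal sketch of that dispatch-on-prefix idea (field names like old_phone, site_code, and description are hypothetical stand-ins for whatever the real model uses):

    import re
    from django.db.models import Q

    def build_search_query(term: str) -> Q:
        # Pick the column to search based on the shape of the search term.
        term = term.strip()
        if term.startswith("01"):
            return Q(old_phone__contains=term)
        if term.startswith("03"):
            return Q(new_phone__contains=term)
        if term.startswith("07"):
            return Q(mobile__contains=term)
        if re.fullmatch(r"[A-Za-z]\d{2}", term):
            return Q(site_code__iexact=term)
        if term.isdigit() and not term.startswith("0"):
            return Q(internal_number=term)
        # Anything else: substring match on the description column.
        return Q(description__icontains=term)

The view would then do something like Entry.objects.filter(build_search_query(q)), with Entry standing in for whatever the real model is called.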
It seems that most people preferring natural language over programming languages don't want to learn the required programming language and end up reinventing their own, worse one.
There is a reason why we invented programming languages as an interface to instruct the machine and there is a reason why we don't use natural language.
I've tried it multiple times, but even after spending 4 hours on a fresh project I don't feel like I know what the hell is going on anymore.
At that point I'm just guessing what the next prompt is to make it work. I have no critical knowledge about the codebase that makes me feel like I could fix an edge case without reading the source code line by line (which at that point would probably take longer than 4 hours).
I don't understand how anyone can work like that and have confidence in their code.
Peter Naur argues that programming is fundamentally an activity of theory building, not just program text production. The code itself is merely the artifact of the real work.
You must not confuse the artifact (the source code) with the mind that produced the artifact. The theory is not contained in the text output of the theory-making process.
The problems of program modification arise from acting on the assumption that programming is just text production; the decay of a program is a result of modifications made by programmers without a proper grasp of the underlying theory. LLMs cannot obtain Naur's Ryleian "theory" because they "ingest the output of work" rather than developing the theory by doing the work.
LLMs may _appear_ to have a theory about a program, but this is an illusion.
To believe that LLMs can write software, one must mistakenly assume that the main activity of the programmer is simply to produce source code, which is (according to Naur) inaccurate.
Are they mainly using certain frameworks that already have a rigid structure, thus allowing LLMs to not worry about code structure/software architecture?
Are they worry-free and just run with it?
Not asking rhetorically, I seriously want to know.
Poll this API endpoint in this file and populate the context with the result. Only a few lines of code.
Update all API calls to that endpoint with a view into the context.
I can give the AI those steps as a list and go adjust styles on the page to my liking while it works. This isn’t the kind of parallelism I’ve found to be common with LLMs. Often you are stuck on figuring out a solution. In that case AI isn’t much help. But some code is mostly boilerplate. Some is really simple. Just always read through everything it gives you and fix up the issues.
After that sequence of edits I don’t feel any less knowledgeable of the code. I completely comprehend every line and still have the whole app mapped in my head.
Probably the biggest benefit I’ve found is getting over the activation energy of starting something. Sometimes I’d rather polish up AI code than start from a blank file.
A year ago, no reasonable person would use AI for anything but small-scoped autocomplete. Now it can author entire projects without oversight. Inevitably every failure case for LLMs is corrected; everything people said LLMs "could never do" they start doing within 6 months of that prognostication.
In the end? no one cares. I get just as much done (maybe more), while doing less work. Maybe some of my skills will atrophy, but I'll strengthen others.
I'm still auditing everything for quality as I would my own code before pushing it. At the end of the day, it usually makes fewer typos than I would. It certainly searches the codebase better than I do.
All this hype on both ends will fade away, and the people using the tools they have to get things done will remain.
The biological senses and abilities have been constantly augmented throughout the centuries, pushing the organic human to hide inside ever deeper layers of what you call yourself.
What's yourself without your material possessions and social connections? There is no such thing as yourself without these.
Now let's wind back. Why resist just one more layer of augmentation of our senses, mind and physical abilities?
perhaps a being that has the capacity for intention and will?
Knowledge is shaped by constraints which inform intention, it doesn't "drive it."
"I want to fly, I intend to fly, I learn how to achieve this by making a plane."
not
"I have plane making knowledge therefore I want and intend to fly"
However, I totally understand that constraints often create a feedback loop where reasoning is reduced to the limitations which confine it.
My Mom has no idea that "her computer" != "windows + hp + etc", and if you were to ask her how to use a computer, she would be intellectually confined to a particular ecosystem.
I argue the same is true for capitalism/dominant culture. If you can't "see" the surface of the thing that is shaping your choices, chances are your capacity for "will" is hindered and constrained.
Going back to this.
> What's yourself without your material possessions and social connections? There is no such thing as yourself without these.
I don't think my very ability to make choices comes from owning stuff and knowing people.
And no, I don't need AI for this level of inquiry.
I think its amazing what giant vector matrices can do with a little code.
Writing code is very easy if you know the solution and the semantics of the coding platform. But knowing the solution is a difficult task, even in a business setting, where the difficulties are mostly communication issues. Knowing the semantics of the coding platform is also difficult, because you’ll probably be using others’ code and you’ll face the same communication issues (lack of documentation, erroneous documentation, etc.).
So being good at programming does not really mean knowing code. It’s more about knowing how to bypass communication barriers to get the knowledge you need.
This article lacks nuance, and could be summarized as "LLMs are bad." Later, I suspect this author (and others of this archetype) will moderate and lament: "What I really meant was: I don't like corporations lying about LLMs, or using them maliciously; I didn't imply they don't have uses." The words in the article do not support this.
I believe this pattern is rooted in social-justice-oriented (Is that still the term?) USA left politics. I offer no explanation for this conflation, but an observation.
Opting in to weirdness and curiosity is the only bug worth keeping, and it will eventually become a norm.
When the AI hype is over and the bubble has burst, I'll still be here, writing quality software using my brain and my fingers, and getting paid to do it.
This is such a bizarre sentiment for any person interested in technology. AI is, without any doubt, the most fascinating and important technology I have seen developed in my lifetime. A decade ago the idea of a computer not only holding a reasonable conversation with a human, but being able to talk with a human on deep and complex subjects seemed far out of reach.
No doubt there are many deep running problems with it, any technology with such a radical breakthrough will have them. But none of that takes away from how monumental of an achievement it is.
Looking down at people for using it or being excited about it is such an extreme position. Also the insinuation that the only reason anybody uses it because they are forced into it, is completely bizarre.
Big tech will build out compute at a never-before-seen speed, and we will reach 2e29 FLOPs faster than ever.
Big tech companies are competing with each other, and they are the ones with the real money in our capitalist world; but even if they found some way to slow down among themselves, countries are now competing too.
Over the next 4 years, with the massive build-out of compute, we will see a lot more clearly how progress will go.
And either we hit obvious limitations or not.
If we do not see an obvious limitation, Fiona's opinion will have zero relevance.
The best chance for everyone is to keep a very, very close eye on AI, to either make the right decisions (not buying that house with a line of credit; creating your own product a lot faster thanks to AI, ...) or be aware of what is coming.
Thanks for the fish and enjoy the ride.
I don't get all the whining of people about having to adapt. That's a constant in our industry and always has been. If what you were doing was so easy that it fell victim to the first generation of AI tools that are doing a decent enough job of it, then maybe what you were doing was a bit Groundhog Day to begin with. I've certainly been involved with a lot of projects where a lot of the work felt that way. Customer wants a web app thing with a login flow and a this and a that. 99% of that stuff is kind of very predictable. That's why agentic coding tools are so good at this stuff. But let's be honest, it was kind of low value stuff to begin with. And it's nice that people overpaid for that for a while, but it was never going to be forever.
There's still plenty of stuff these tools are less good at. It gets progressively harder if you are integrating lots of different niche things or doing some non standard/non trivial things. And even those things where it does a decent job, it still requires good judgment and expertise to 1) be able to even ask for the right thing and then 2) judge if what comes back is fit for purpose.
There's plenty of work out there supporting companies with decades of legacy software that are not going to be throwing away everything they have overnight. Leveling up their UIs with AI powered features, cross integrating a lot of stuff, etc. is going to generate lots of work and business. And most companies are very poorly equipped to do that in house even if they have access to agentic coding tools.
For me AI is actually generating more work, not less. I'm now taking on bigger things that were previously impossible to take on without involving more people. I have about 10x more things I want to do than I have bandwidth for. I have to take decisions about doing things the stupid old way because it's better/faster or attempting to generate some code. All new tools do is accelerate the pace and raise the ambition levels. That too is nothing new in our industry. Things that were hard are now easy, so we do more of them and find yet harder things to do next. We're not about to run out of hard things to do any time soon.
Adapting is hard. Not everyone will manage. Some people might burn out doing that or change career. And some people are in denial or angry about that. And you can't really expect others to lose a lot of sleep over this. Whether that's unfair or not doesn't really matter.
Generally speaking people just cannot really think this way. People broadly are short term thinkers. If something is convenient, people will use it. Is it easier to spray your lawn with pesticides? Yep, cancer (or biome collapse) is a tomorrow problem and we have a "pest" problem today. Is it difficult to sit alone with your thoughts? Well good news, Youtube exists and now you don't have to. What happens next (radicalization, tracking, profiling, propaganda, brain rot) is a tomorrow problem. Do you want to scroll at the end of the day and find out what people are talking about? Well, social media is here for you. Whether or not it's accidentally part of a privatized social credit system? Well again, that 's a problem for later. I _need_ to feel comfortable _right now_. It doesn't matter what I do to the world so long as I'm comfortable _right now._
I don't see any way out of it. People can't seem to avoid these patterns of behavior. People asking for regulation are about as realistic as people hoping for abstinence. It's a correct answer in principle but just isn't going to happen.
I think that can be offset if you have a strong motivation, a clear goal to look forward to in a reasonable amount of time, to help you endure through the discomfort:
Before I had enough financial independence to be able to travel at will, I was often stuck in a shit ass city, where the most fun to be had was video games and fantasizing about my next vacation coming up in a month or 2, and that helped me a lot in coping with my circumstances.
Too few people are allowed or can afford even this luxury of a pleasant future, a promise of a life different/better than their current.
I wonder how much of that is "nature vs. nurture"?
Like the Tolkienesque elves in fantasy worlds, would humans be more chill too if our natural lifespans were counted in centuries instead of decades?
Or is it the pace of society, our civilization, that always keeps us on edge?
I mean I'm not sure if we're born with a biological sense of mortality, an hourglass of doom encoded into our genes..
What if everybody had 4 days of work per week, guaranteed vacation time every few months, kids didn't have to wake up at 7/8 in the morning every day, and progress was measured biennially, e.g. 2 years between school grades/exams, and economic performance was also reviewed in 2 year periods, and so on, could we as a species mellow the fuck out?
Dogs barely set food aside; they prefer gorging, which is a good survival technique when your food spoils and can be stolen.
Bees, at the other end of the spectrum, spend their lives storing food (or "canning", if you will - storing prepared food).
We first evolved in areas that were storage-averse (Africa), and more recently many of us moved to areas with winters (where storage is both useful and necessary). I think "finish your meal, you might not get one tomorrow" is our baseline survival instinct; "Winter is coming!" is an afterthought, and might be more nurture-based behavior than the other.
For the first time in human history most people don't have to worry about famine, wars, disasters, or disease upending their lives; they can just wait it out in their homes.
Will that eventually translate to a more relaxed "instinct"?
... but, it is definitely worth considering whether the status quo is tolerable and whether we as technical creatives are willing to work with tools that live within it.
AI lets you do that faster.
AI may suggest a dumb way, so you have to think, and tell it what to do.
My rate of thinking is faster than typing, so the bottleneck has switched from typing to thinking!
Don't let AI think for you. Do actual, intentional architecture design.
Programmers that don't know CS who only care about hammering the keyboards because they're artisans have little future.
AI also give me back my hobby after having kids -- time is valuable, and AI is energy efficient.
We are truly living in a Cambrian explosion: a lot of slop will be produced, but market and selection pressure will weed it out.
"It took both time and experience before the workers learned to distinguish between machinery and its employment by capital, and to direct their attacks, not against the material instruments of production, but against the mode in which they are used."
- Karl Marx. Das Kapital Vol 1 Ch 15: Machinery and Modern Industry, 1867
Tech can always be good, how its used is what makes it bad, or not.
The first aspect is the “I don’t touch AI with a stick”. AI is a tool. Nobody is obligated to touch it, obviously, but it is useful in certain situations. So I disagree with the author's position to avoid using AI. It reads like stubbornness for the sake of avoiding new tech.
The second angle is the “big tech corporate control” angle. And honestly, I don’t get this argument at all. Computers and the digital world have created the biggest dystopian world we have ever witnessed, from absurd amounts of misinformation and propaganda fueled by bot farms operated at government level, all the way to digital surveillance tech. If you have that strong an opinion against big tech and digital surveillance, blaming AI for it while enjoying the other perils of big tech is virtue signaling.
Also, what’s up with the overuse of “fascism” in places where it does not belong?
Is AI resource-intensive by design? That doesn’t make any sense to me. I think companies are furiously working toward reducing AI costs.
Is AI a tool of fascism? Well, I’d say anything that can make money can be a tool of fascism.
I can sort of jive with the argument that AI is/will be reinforcing the ideals of those in power, although I think traditional media and the tooling that AI intends to replace like search engines accomplished that just fine.
What we are left with is, I think, an author who is in denial about their special snowflake status as a programmer. It was okay for the factory worker to be automated away, but now that it’s my turn to be automated away I’m crying fascism and ethics.
Their friends behave the way they do about AI because they know it’s useful but know it’s unpopular. They’re trying to save face while still using the tool because it’s so obviously useful and beneficial.
I think the analogy is similar to the move from film to digital. There will be a tiny amount of people who never buy in, there will be these “ashamed” adopters who support the idea of film and hope it continues on, but for themselves personally would never go back to film, and then the majority who don’t see the problem with letting film die.
I have a little pet theory brewing. Corporate work claims that we hire junior devs who become intermediate devs, who then become senior devs. The doomsday crowd claim that AI has replaced junior and intermediate devs, and is coming for the senior devs next.
This has felt off to me because I do way more than just code. Business users don’t want to get into the details of building software. They want a guy like me to handle that.
I know how to talk to non-technical SMEs and extract their real requirements. I understand how to translate this into architecture decisions that align with the broader org. I know how to map it into a plan that meets those org objectives. And so on.
I think that really what happens is nerds exist and through osmosis a few of them become senior developers. They in turn have junior and intermediate assistant developers to help them deliver. Sometimes those assistants turn out to be nerds themselves, and they spontaneously transmute into senior developers!
AI is replacing those assistant human developers, but we will still need the senior developers because most business people want to sit with a real human being to solve their problem.
I will, however, get worried when AIs start running businesses. Then we are in trouble.
The entire open source movement would like a word with you.
I guess it depends on what you define as "tech", but the '80s, '90s, and early '00s had an explosion of tiny hardware and software startups. Some even threatened Intel with x86 clones.
It wasn't until the late '90s that NVIDIA was the clear GPU winner, for instance. It had serious competition from 3DFX, ATI, and a bunch of other smaller companies.
Used right, Claude Code is actually very impressive. You just have to already be a programmer to use it right - divide the problem into small chunks yourself, instruct it to work on the small chunks.
Second example: there is a certain expectation of language in American professional communication. As a non-native speaker I can tell you that not following that expectation has a real impact on a career. AI has been transformational: writing an email myself and asking it to ‘make this into American professional English’.
The youthful desire to rage against the machine?
It's possible to use AI chatbots against the system of power, to help detect and point out manipulation, or lack of nuance in arguments or political texts. To help decipher legalese in contracts, or point out problematic passages in terms of use. To help with interactions with the state, even non-trivial ones like FOI requests, or disputing information disclosure rejections, etc.
AI tools can be used to help against the systems of power.
AI speeds me up a tremendous amount in my day job as a product engineer.
We shape the world through our choices, generally under the umbrella of deterministic systems. AI is non-deterministic, and on top of that it amplifies the concerns of a few wealthy corporations and individuals.
So is AI effective at generating marketing material or propagating arguably vapid value systems in the face of ecological, cultural, and economic crisis? I'd argue yes. But effective also depends on an intention, and that's not my intention, so it's not as effective for me.
I think we need more "manual" choice, and more agency.
I just think the things they are effective at are a net negative for most of us.
Think of old SAP systems with a million obscure customizations: any medium to large codebase that is mostly vibe coded is instantly legacy code.
In your hole analogy: people don't care if a mine is dug by a bot or planned by humans until there are structural integrity issues or collapsing tunnels and nobody is able to read the map properly.
It is the tool obsessed people who treat everything like a computer game that like "AI" for software engineering. Most of them have never written anything substantial themselves and only know the Jira workflow for small and insignificant tickets.
What a loaded sentence lol. Implying being a hacker has some correlation with being progressive. And implying somehow anti-AI is progressive.
> AI systems being egregiously resource intensive is not a side effect — it’s the point.
Really? So we're not going to see AI users celebrating over how much less power DeepSeek used, right?
Anyway, guess what else is resource intensive? Making chips. Follow that line of logic and you will find that computers consolidate power, and real progressive hackers should use pencil and paper only.
Back to the first paragraph...
> almost like a reflex, was a self-justification of why the way they use these tools is fine, while other approaches were reckless.
The irony is through the roof. This article is essentially: when I use computational power how I like, it's being a hacker. When others use computational power their way, it's being fascists.
I didn't read it that way. "Progressive hacker circles" doesn't imply that all hackers are progressive, it can just be distinguishing progressive circles from conservative ones.
I mean, yeah, that kind of checks out. The quoted part doesn't make much sense to me, but that most hackers are progressives (as in "enact progress by change", not the twisted American version) should hardly come as a surprise. The opposite would be a conservative hacker (again, not the US version, but the global definition: "reluctant to change"), which is pretty much an oxymoron. Best would be to eschew political/ideological labels altogether, and just say we hackers are apolitical :)
- inventing scientific racism and (after that was debunked) reinventing other academic pretenses to institutionalize race-base governance and society
- forcibly sterilizing people with mental illnesses until the 1970s, through 2005 via coercion, and until the present via lies, fake studies, and ideological subversion
- being outspokenly antisemitic
Personally, I think it’s a moral failing we allow such vile people to pontificate about virtues without being booed out of the room.
cachonk!
snap your cuffs, wait for it, eyebrows!
and demonstrate your mastery, to the mutterings of the golly-gees
It will last several more months until the, GASP!!!, bills, maintenance costs, regulatory burdens, and various legal issues combine to pop AI's balloon, at which point AI will be left automating all of the tedious, but chair-filling, bureaucratic/secretarial/apprentice positions throughout the white collar world. Technology is slowly pushing into other sectors, where legacy methods and equipment can now be reduced to a free app on a phone; more to the point, a free, local-only app. The fact is that we are way over-siliconed going forward and that will bite as well: terabyte phones for $100, what then?
My opinion: This sort of low-evidence writing is all too common in tech circles. It makes me wish computer science and engineering majors were forced to spend at least one semester doing nothing but the arts.
The most striking inconsistency emerges in how the author frames the people who use LLM tools. Early in the piece, colleagues experimenting with AI coding assistants are described in the language of addiction and pathology: they are “sucked into the belly of the vibecoding grind,” experiencing “existential crisis,” engaged in “harmful coping.” The comparison to watching a friend develop a drinking problem is explicit and damning. This framing treats AI adoption as a personal failure, a weakness of character, a moral lapse. Yet only paragraphs later, the author pivots to acknowledging that people are “forced to use these systems” by bosses, UI patterns, peer pressure, and structural disadvantages in school and work. They even note their own privilege in being able to abstain. These two framings cannot coexist coherently. If using AI tools is coerced by material circumstances and power structures, then the addiction metaphor is not just inapt but cruel — it assigns individual blame for systemic conditions. The author wants to have it both ways: to morally condemn users while also absolving them as victims of circumstance.
This tension extends to the author’s treatment of their own social position. Having acknowledged that abstention from LLMs requires privilege, they nonetheless continue to describe AI adoption as a “brainworm” that has infected even “progressive hacker circles.” The disgust is palpable. But if avoiding these tools is a luxury, then expressing contempt for those who cannot afford that luxury is inconsistent at best and self-congratulatory at worst. The acknowledgment of privilege becomes a ritual disclaimer rather than something that actually modifies the moral judgments being rendered.
The author’s claims about intentionality represent another significant weakness. The assertion that AI systems being resource-intensive “is not a side effect — it’s the point” is presented as revelation, but it functions as an unfalsifiable claim. No evidence is offered that anyone designed these systems to be resource-hungry as a mechanism of control. The technical requirements of training large models, competitive market pressure to scale, and the emergent dynamics of venture capital investment all offer more parsimonious explanations that don’t require attributing coordinated malicious intent. Similarly, the claim that “AI systems exist to reinforce and strengthen existing structures of power and violence” is stated as though it were established fact rather than contested interpretation. This is the central claim of the piece, and yet it receives no argument — it is simply asserted and then built upon, which amounts to begging the question.
The essay also suffers from a pronounced selection bias in its examples. Every person described using AI tools is in crisis, suffering, or compromised. No one uses them mundanely, critically, or with benefit. This creates a distorted picture that serves rhetorical purposes but does not reflect the range of actual use cases. The author’s friends who share their anti-AI sentiment are mentioned approvingly, establishing clear in-group and out-group boundaries. This is identity formation masquerading as analysis — good people resist, compromised people succumb.
There is a false dichotomy running through the piece that deserves attention. The implied choice is between the author’s total abstention, not touching LLMs “with a stick,” and being consumed by the pathological grind described earlier. No middle ground exists in this telling. The possibility of critical, limited, or thoughtful engagement with these tools is never acknowledged as legitimate. You are either pure or contaminated.
Reality doesn’t work this way! It’s not black and white. My take: AI is a transformative technology and the spectrum of uses and misuses of AI is vast and growing.
The philosophical core of their argument also contains an unexamined equivocation. The author invokes the extended cognition thesis — the idea that tools become part of us and shape who we are — to make AI seem uniquely threatening. But this same argument applies to every tool mentioned in the piece: hammers, pens, keyboards, dictionaries. The author describes their own fingers “flying over the keyboard, switching windows, opening notes, looking up words in a dictionary” as part of their extended cognitive process. If consulting a dictionary shapes thought and becomes part of our cognitive process, what exactly distinguishes that from asking a language model to check grammar or suggest a word? The author never establishes what makes AI categorically different from the other tools that have already become part of us. The danger is assumed rather than demonstrated.
There is also a genetic fallacy at work in the argument about power. The author suggests AI is bad partly because of who controls it — surveillance capitalists, fascists, those with enormous physical infrastructure. But this argument conflates the origin and ownership of a technology with its inherent properties. One could make identical arguments about the printing press, the telephone, or the internet itself. The question of whether these tools could be structured differently, owned differently, or used toward different ends is never engaged. Everything becomes evidence of a monolithic system of control.
Finally, there is an unacknowledged irony in the piece’s medium and advice. The author recommends spending less time on social media and reading books instead, while writing a blog post clearly designed for social sharing, complete with the vivid metaphors, escalating moral stakes, and calls to action that characterize viral content. The post exists within and depends upon the very attention economy it criticizes. This is not necessarily hypocrisy — we all must operate within systems we find problematic — but the lack of self-awareness about it is notable given how readily the author judges others for their compromises.
The essay is most compelling when it stays concrete: the phenomenology of writing as discovery, the real pressures workers face, the genuine concerns about who controls these systems and toward what ends. It is weakest when it reaches for grand unified theories of intentional domination, when it mistakes assertion for argument, and when it allows moral contempt to override the structural analysis it claims to offer. The author clearly cares about human flourishing and autonomy, but the piece would be stronger if that care extended more generously to those navigating these technologies without the privilege of refusal.
I didn't hear the author criticizing the character of their colleagues. On the contrary, they wrote a whole section on how folks are pressured or forced to use AI tools. That pressure (and fear of being left behind) drives repeated/excessive exposure. That in turn manifests as dependence and progressive atrophy of the skills they once had. Their colleagues seem aware of this as evidenced by "what followed in most of them, almost like a reflex, was a self-justification of why the way they use these tools is fine". When you're dependent on something, you can always find a 'reason'/excuse to use. AA and other programs talk about this at length without morally condemning addicts or assigning individual blame.
> For most of us, self-justification was the maker of excuses; excuses, of course, for drinking, and for all kinds of crazy and damaging conduct. We had made the invention of alibis a fine art. [...] We had to drink because at work we were great successes or dismal failures. We had to drink because our nation had won a war or lost a peace. And so it went, ad infinitum. We thought "conditions" drove us to drink, and when we tried to correct these conditions and found that we couldn't to our entire satisfaction, our drinking went out of hand
Framing something as addictive does not necessarily mean that those suffering from it are failures/weak/immoral, but you seem to have projected that onto the author.
Their other analogy ("brainworm") is similar. Something that no-one would willingly sign up for if presented with all the facts up front but that slips in and slowly develops into a serious issue. Faced with mounting evidence of the problem, folks have a strong incentive to downplay the issue because it's cognitively uncomfortable and demands action. That's where the "harmful coping" comes in: minimizing the severity of the problem, avoiding the topic when possible, telling yourself or others stories about how you're in control or things will work out fine, etc.
> “We programmers are currently living through the devaluation of our craft”
My interpretation of what the author means by devaluation is the general trend we're seeing with LLMs.
The theory I hear from investors is that as LLMs generally improve, there will come a day when an LLM's default code output, coupled with continued hardware speed improvements, becomes _good enough_ for the majority of companies - even if the code looks like crap and is 100x slower than it needs to be.
This doesn't mean there won't be a few companies that still need SWEs to drop down and do engineering, but tbh, the majority of companies today just need a basic web app - and we've commoditized web app dev tools to oblivion. I'd even go as far as to argue that what most programmers do today isn't engineering; it's gluing together an ecosystem of tooling and/or APIs.
Real engineering seems to happen outside of work on open-source projects, on specialized teams at the Mag 7, or at niche, deeply technical startups.
EDIT: I’m not saying this is good or bad, but I’m just making the observation that there is a trend towards devaluing this work in the economy for the majority of people, and I generally empathize with people who just want stability and to raise a family within reasonable means
Now I write just about everything in Rust because why not? If I can vibe code Rust about as fast as Python, why would I ever use Python outside of ML?
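For what it's worth, here's the kind of small, throwaway task I mean, the sort of thing I'd have reached for Python for a few years ago. A rough sketch of my own; the word-count job and the file name are just illustrative, not anything from the article or this thread:

```rust
// Hypothetical example: count word frequencies in a text file and
// print the ten most common words. A classic "just write a quick
// Python script" job that reads fine as plain Rust.
use std::collections::HashMap;
use std::env;
use std::fs;

fn main() {
    // Take the file path from the command line, defaulting to "notes.txt".
    let path = env::args().nth(1).unwrap_or_else(|| "notes.txt".to_string());
    let text = fs::read_to_string(&path).expect("could not read file");

    // Tally lowercase words, splitting on whitespace.
    let mut counts: HashMap<String, usize> = HashMap::new();
    for word in text.split_whitespace() {
        *counts.entry(word.to_lowercase()).or_insert(0) += 1;
    }

    // Sort by descending count and print the top ten.
    let mut entries: Vec<_> = counts.into_iter().collect();
    entries.sort_by(|a, b| b.1.cmp(&a.1));
    for (word, count) in entries.into_iter().take(10) {
        println!("{count:>6}  {word}");
    }
}
```

Nothing about this needs Python's ergonomics anymore; `cargo run -- notes.txt` feels like running a one-off script, except the compiler catches the dumb mistakes before it runs.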