I don't care how well your "AI" works
310 points
6 hours ago
| 60 comments
| fokus.cool
| HN
easterncalculus
1 hour ago
[-]
> I find it particularly disillusioning to realize how deep the LLM brainworm is able to eat itself even into progressive hacker circles.

That's the thing, hacker circles didn't always have this 'progressive' luddite mentality. This is the culture that replaced hacker culture.

I don't like AI, generally. I am skeptical of corporate influence, I doubt AI 2027 and so-called 'AGI'. I'm certain we'll be "five years away" from superintelligence for the foreseeable future. All that said, the actual workday is absolutely filled with busy work that no one really wants to do, and the refusal of a loud minority to engage with that fact is what's leading to this. It's why people can't post a meme, quote, article, or whatever else could be interpreted (very often, falsely) as AI-generated in a public channel, or ask a chatbot to explain a hand-drawn image, without the off chance that they get an earful from one of these 'progressive' people. These people bring way more toxicity to daily life than those they wage their campaigns against.

reply
mattgreenrocks
49 minutes ago
[-]
> This is the culture that replaced hacker culture.

Somewhere along the way to "everybody can code," we threw out the values and aesthetics that attracted people in the first place. What began as a rejection of externally imposed values devolved into a mouthpiece for the current powers and principalities.

This is evidenced by the new set of hacker values being almost purely performative compared with the old set. The tension between money and what you make has been boiled away completely. We lean much more heavily on where someone has worked ("ex-Google") vs their tech chops, which (like management) we have given up on trying to actually evaluate. We routinely devalue craftsmanship because it doesn't bow down to almighty Business Impact.

We sold out the culture, which paved the way for it to be hollowed out by LLMs.

There is a way out: we need to create a culture that values craftsmanship and dignifies work done by developers. We need to talk seriously and plainly about the spiritual and existential damage done by LLMs. We need to stop being complicit in propagating that noxious cloud of inevitability and nihilism that is choking our culture. We need to call out the bullshit and extended psyops ("all software jobs are going away!") that have gone on for the past 2-3 years, and mock them ruthlessly: despite hundreds of billions of dollars, the technology hasn't fully delivered on its promises, and investors are starting to be a bit skeptical.

In short, it's time to wake up.

reply
quantummagic
22 minutes ago
[-]
> we need to create a culture that values craftsmanship and dignifies work done by developers.

Mostly I agree with you. But there's a large group of people who are way too contemptuous of craftsmen using AI. We need to push back against this arrogant attitude. Just as we shouldn't be contemptuous of a woodworking craftsman using a table saw.

reply
cool_dude85
13 minutes ago
[-]
>Just as we shouldn't be contemptuous of a woodworking craftsman using a table saw.

Some tools are table saws, and some tools are subcontracting work out to lowest cost bidders to do a crap job. Which of the two is AI?

reply
bigstrat2003
6 minutes ago
[-]
That's because there's nothing "craftsman" about using AI to do stuff for you. Someone who uses AI to write their programs isn't the equivalent of a carpenter using a table saw, they are the equivalent of a carpenter who subcontracts the piece out to someone else. And we wouldn't show respect to the latter person either.
reply
robot-wrangler
23 minutes ago
[-]
I realized recently that if you want to talk about interesting topics with smart people - if you expect things like critical thinking and nuanced discussion - you're currently much better off talking literature or philosophy than anything related to tech. I mean, everyone knows that discussing politics/economics is rather hopelessly polarized; everyone has their grievances or their superstitions or injuries that they cannot really put aside. But it's a pretty new thing that discussing software/engineering on the merits is almost impossible.

Yes, I know about the language / IDE / OS wars that software folks have indulged in before. But the reflexive, shallow pro/anti takes on AI are way more extreme, and they're there even in otherwise serious people. And in general, anti-intellectual sentiment, mindless follow-the-leader, and proudly ignorant stances on many topics are just out of control everywhere, and curiosity seems to be dead or dying.

You can tell it's definitely tangled up with money, though, and this remains a good filter for real curiosity. Math that isn't somehow related to ML is something HN is guaranteed to shit on. No one knows how to have a philosophy startup yet (WeWork and other culty scams notwithstanding!). Authors, readers, novels, and poetry aren't moving stock markets. So at least for now there's somewhere left for the intellectually curious to retreat to.

reply
majormajor
11 minutes ago
[-]
I don't really see it as any different than the Windows/Unix, Windows/Mac, etc, flame wars that boiled for decades even amongst those with no professional stake in it. Those were otherwise serious people too, parroting meaningless numbers and claims that didn't actually make much of a difference to them.

If anything, the AI takes are much more meaningful. A Mac/PC flame war online was never going to significantly affect your career. A manager who is either all-in on AI or all-out on it can.

reply
underlipton
27 minutes ago
[-]
Get some black dudes involved. Black dudes know how to keep it real.

(It will be interesting to see how many people are able to accurately draw the conceptual line between what part of my sentiment is a joke and what part is serious.)

reply
ukFxqnLa2sBSBf6
7 minutes ago
[-]
I consider myself progressive and my main issue with the technology is that it was created by stealing from people who have not been compensated in any way.

I wouldn’t blame any artist that is fundamentally against this tech in every way. Good for them.

reply
amarant
46 minutes ago
[-]
It's Turing's Law:

Any person who posts a sufficiently long text online will be mistaken for an AI.

reply
LocalH
15 minutes ago
[-]
A good start (albeit the most basic one) would be to encourage budding hackers to read through the Jargon File.
reply
GrantMoyer
18 minutes ago
[-]
I largely agree with this, but at the same time, I empathize with TFA's author. I think it's because LLMs feel categorically different from other technological leaps I've been excited about.

The recent results in LLMs and diffusion models are undeniably, incredibly impressive, even if they're not to the point of being universally useful for real work. However, they fill me with a feeling of supreme disappointment, because each is just this big black box we shoved an unreasonable amount of data into, and now the black box is the best image-processing/natural-language-processing system we've ever made. Depending on how you look at it, they're either so unimaginably complex that we'll never understand how they really work, or so brain-dead simple that there's nothing to really understand at all. It's like some cruel joke the universe decided to play on people who like to think hard and understand the systems around them.

reply
chemotaxis
22 minutes ago
[-]
> It's why people can't post a meme, quote, article, whatever could be interpreted (very often, falsely) as AI-generated in a public channel, or ask a chatbot to explain a hand-drawn image without the off chance that they get an earful from one of these 'progressive' people.

It happens, but I think it's pretty uncommon. What's a lot more common is people getting called out for offloading tasks to LLMs in a way that just breaches protocol.

For example, if we're having an argument online and you respond with a chatbot-generated rebuttal to my argument, I'm going to be angry. This is because I'm putting in effort and you're clearly not interested in having that conversation, but you still want to come out ahead for the sake of internet points. Some folks would say it's fair game, but consider the logical conclusion of that pattern: we both have our chatbots endlessly argue on our behalf. That's pretty stupid, right?

By extension of this, there's plenty of people who use LLMs to "manage" their online footprint: write responses to friends' posts, come up with new content to share, generate memes, produce a cadence of blog posts. Anyone can ask an LLM to do that, so what's the point of generating this content in the first place? It's not yours. It's not you. So what's the game, other than - again - trying to come out on top for internet points?

Another fairly toxic pattern is when people use LLMs to produce work output without the effort to proofread or fact-check it. Over the past year or so, I've gotten so many LLM-generated documents that simply made no sense, and the sender considered their job to be done and left the QA to me.

reply
bill3389
6 minutes ago
[-]
Unfortunately, there will be less and less purely human-generated content; more and more of it will be AI-generated or AI-assisted in the future.

We are angry because we grew up in an age when content was generated by humans and computer bots were ineffective. For the newer generation, however, AI-generated content will be the new normal, like how we got used to watching people inside a big flat box (TV).

reply
ronsor
8 minutes ago
[-]
The only thing more insufferable than the "AI do everything and replace everyone" crowd is the "AI is completely useless" crowd. It's useful for some things and useless for others, just like any other tool you'll encounter.
reply
nrclark
30 minutes ago
[-]
Ironically, the actual luddites weren't anti-technology at all. Mechanized looms at the time produced low-quality, low-durability cloth at low prices. The luddite pushback was more about the shift from durable to disposable.

It's a message that's actually pretty relevant in an age of AI slop.

reply
anubistheta
4 minutes ago
[-]
I agree. I think this is what happens when a person transitions from a progressive mindset to a conservative one, but has made being "progressive" a central tenet of their identity.

Progressiveness is forward-looking and a proponent of rapid change, so it would be natural for LLMs to be popular amongst that crowd. Also, progressivism should be accepting of, and encouraging, the evolution of concepts and social constructs.

In reality, many people define "progressiveness" as "when things I like happen, not when things I don't like happen." When they lose control of the direction of society, they end up just as reactionary and dismissive as the people they claim to oppose.

>AI systems exist to reinforce and strengthen existing structures of power and violence. They are the wet dream of capitalists and fascists.

>Craft, expression and skilled labor is what produces value, and that gives us control over ourselves

To me, that sums up the author's biases. You may value skilled labor, but generally people don't. Nor should they. Demand is what produces value. The latter half of the piece falls into a diatribe of "Capitalism Bad".

reply
shadowgovt
15 minutes ago
[-]
Hackers in the '80s were taking apart phone hardware and making free long-distance calls because the phone company didn't deserve its monopoly purely for existing before they were born. Hackers in the '90s were bypassing copyright and wiping the hard drive of machines they cobbled together out of broken machines to install an open source OS on it so that Redmond, WA couldn't dictate their computing experience.

I think there's a direct through-line from hacker circles to modern skepticism of the kind of AI discussed in this article: the kind where rules you don't control determine the behavior of the machine and where most of the training and operation of the largest and most successful systems can, currently, only be accessed via the cloud portals of companies with extremely questionable ethics.

... but I don't expect hackers to be anti-AI indefinitely. I expect them to be sorting out how many old laptops with still-serviceable graphics cards you have to glue together to build a training engine that can produce a domain-specific tool that rivals ChatGPT. If that task proves impossible, then I suspect based on history this may be the one place where hackers end up looking a little 'luddite' as it were.

... because "If the machine cannot be tamed it must be destroyed" is very hacker ethos.

reply
pksebben
35 minutes ago
[-]
Likely progressive, but definitely not luddite [0]. Anti-capitalist for sure.

I struggle with this discourse deeply. With many posters like OP, I align almost completely - unions are good, large megacorps are bad, death to fascists, etc. It's when we get to the AI issue that I do a bit of a double take.

Right now, AI is almost completely in the hands of a few large corp entities, yes. But once upon a time, so was the internet, so were processing chips, so was software. This is the power of the byte - it shrinks progressively and multiplies infinitely - thus making it inherently diffuse and populist (at the end of the day). It's not the relationship to our cultural standards that causes this - it's baked right into the structure of the underlying system. Computing systems are like sand - you can melt them into a tower of glass, but those are fragile and will inevitably become sand once again. Sand is famously difficult to hold in a tight grasp.

I won't say that we should stop fighting against the entrenchment of powers like OpenAI - fine, that's potentially a worthy fight and if that's what you want to focus on go ahead. However, if you really want to hack the planet, democratize power and distribute control, what you have to be doing is working towards smaller local models, distributed training, and finding an alternative to backprop that can compete without the same functional costs.

We are this close to having a guide in our pocket that can help us understand the machine better. Forget having AI "do the work" for you; it can help you to grok the deeper parts of the system such that you can hack them better - and if we're to come out of this tectonic shift in tech with our heads above water, we absolutely need to create models that cannot be owned by the guy with the $5B datacenter.

Deepseek shows us the glimmer of a way forward. We have to take it. The megacorp AI is already here to stay, and the only antidote is an AI that they cannot control. It all comes down to whether or not you genuinely believe that the way of the hacker can overcome the monolith. I, for one, am a believer.
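
To make "smaller local models" concrete, here's a minimal sketch of running an open-weights model entirely on your own hardware - assuming the llama-cpp-python package and a quantized GGUF model file you've already downloaded (the model path below is illustrative, not a real file):

    # Hypothetical example: local inference, no cloud portal involved.
    # Assumes: pip install llama-cpp-python, plus a downloaded GGUF model.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/some-8b-instruct.Q4_K_M.gguf",  # illustrative path
        n_ctx=4096,  # context window sized for consumer hardware
    )

    out = llm("Explain backpropagation in two sentences.", max_tokens=128)
    print(out["choices"][0]["text"])

Inference like this already runs on a laptop; it's the training side that still belongs to the guy with the datacenter, which is why distributed training matters so much.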

0 - https://phrack.org/issues/7/3

reply
billy99k
14 minutes ago
[-]
How are unions any better than megacorps? My brother is part of a union and the leaders make millions.

He's pigeonholed at the same low pay rate and can't ever get a raise until everyone in the same role also gets a raise (which will never happen). It traps people, because many union jobs can't or won't innovate, and when those workers look elsewhere, they're underskilled (and stuck).

You mention 'deepseek'. Are you joking? It's owned by the Chinese government... and you claim to hate fascism? Lol?

Big companies only have the power now because the processing power to run LLMs is expensive. Once there are breakthroughs, anyone can have the same power in their house.

We have been in a tech slump for a while now. Large companies will drive innovations for AI that will help everyone.

reply
pksebben
30 seconds ago
[-]
That's not a union - that's another corporate entity parading as a union. A union, operating as it should, is governed by the workers as a collective and enriches all of them at the same rate.

Deepseek is open source, which is why I mention it. It was made in China, but it shows a way to create these models at vastly reduced cost, and it was done with transparent methodology so we can learn from it. I am not saying "the future is Deepseek"; I am saying "there are lessons to be learned from Deepseek".

I actually agree with you on the corporate bootstrap argument - I think we ought to be careful, because if they ever figure out how to control the output they will turn off outputs that help develop local models (gotta protect that moat!), but for now I use them myself to study and learn about building locally and I think everyone else ought to get on this train as well. For now, the robust academic discourse is a very very good thing.

reply
poszlem
1 hour ago
[-]
> That's the thing, hacker circles didn't always have this 'progressive' luddite mentality. This is the culture that replaced hacker culture.

People who haven't lived through the transition will likely come here to tell you how wrong you are, but you are 100% correct.

reply
codeflo
1 hour ago
[-]
You were proven right three minutes after you posted this. Something happened; I'm not sure what or how. Hacking became reduced to "hacktivism", and technology stopped being the object of interest in those spaces.
reply
Blackthorn
55 minutes ago
[-]
> and technology stopped being the object of interest in those spaces.

That happened because technology stopped being fun. When we were kids, seeing Penny communicating with Brain through her watch was neat and cool! Then when it happened in real life, it turned out that it was just a platform to inject you with more advertisements.

The "something" that happened was ads. They poisoned all the fun and interest out of technology.

Where is technology still fun? The places that don't have ads being vomited at you 24/7. At-home CNC (including 3d printing, to some extent) is still fun. Digital music is still fun.

reply
DennisP
2 minutes ago
[-]
A lot of fun new technology gets shouted down by reactionaries who think everything's a scam.

Here on "hacker news" we get articles like this, meanwhile my brother is having a blast vibe-coding all sorts of stuff. He's building stuff faster than I ever dreamed of when I was a professional developer, and he barely knows Python.

In 2017 I was having great fun building smart contracts, constantly amazed that I was deploying working code to a peer-to-peer network, and I got nothing but vitriol here if I mentioned it.

I expect this to keep happening with any new tech that has the misfortune to get significant hype.

reply
bikelang
3 minutes ago
[-]
The ads are just a symptom. The tsunami of money pouring in was the corrosive force. Funny enough - I remain hopeful on AI as a skill multiplier. I think that’ll be hugely empowering for the real doers with the concrete skill sets to create good software that people actually want to use. I hope we see a new generation of engineer-entrepreneurs that opt to bootstrap over predatory VCs. I’d rather we see a million vibrant small software businesses employing a dozen people over more “unicorns”.
reply
tekne
43 minutes ago
[-]
It's not ads, honestly. It's quality - whether the tool is designed to empower the user. Have you ever seen something encrusted in ads that was designed to empower the user? At the very least, ads necessitate reducing the user's power to remove them.

But it's fundamentally a correlation, and this observation is important because something can be completely ad-free and yet disempowering, and hence unpleasant to use; it's just that the reverse is rare.

reply
SpicyLemonZest
28 minutes ago
[-]
> It's not ads, honestly. It's quality - whether the tool is designed to empower the user. Have you ever seen something encrusted in ads that was designed to empower the user? At the very least, ads necessitate reducing the user's power to remove them.

Yes, a number of ad-supported sites are designed to empower the user. Video streaming platforms, for example, give me nearly unlimited freedom to watch what I want when I want. When I was growing up, TV executives picked a small set of videos to make available at 10 am, and if I didn't want to watch one of those videos I didn't get to watch anything. It's not even a tradeoff; TV shows had more frequent and more annoying ads.

reply
recursive
13 minutes ago
[-]
> Video streaming platforms, for example, give me nearly unlimited freedom to watch what I want when I want.

But they'd prefer if it was shorts.

reply
phil21
51 minutes ago
[-]
>The "something" that happened was ads. They poisoned all the fun and interest out of technology.

Disagree. Ads hurt, but not as much as technology being invaded by the regular masses who have no inherent interest in tech for the sake of tech. Ads came after this, since they needed an audience first.

Once that line was crossed, it all became far less fun for those who were in it for the sheer joy, exploration, and escape from the mundane social expectations wider society has.

It may encompass both "hot takes" to simply say that money ruined tech. Once future finance bros realized tech was an easier path to the easy life than investment banking - all hope was lost.

reply
Blackthorn
42 minutes ago
[-]
I don't think that something becoming accessible to a lot more people devalues the experience.

To use the two examples I gave in this thread: digital music is more accessible than ever before, and it's going from strength to strength. While at-home subtractive CNC is still in the realm of deep hobbyists, 3d printing* and CNC cutting/plotting* (Cricut, others) have been accessible to and embraced by the masses for a decade now, and those spaces are thriving!

* Despite the best efforts of some of the sellers of these to lock down and enshittify the platforms. If this continues, this might change and fall into the general tech malaise, and it will be a great loss if that happens.

reply
dude250711
59 minutes ago
[-]
So the folks who love command lines and terminals haven't been luddites all this time?
reply
grayhatter
24 minutes ago
[-]
lol, no. They're people who think faster. Someone who uses vscode will never produce code faster than someone proficient in vim. Someone who clicks through GUI windows will never be able to control their computer as fast as someone with a command prompt.

I'm sure there are some who enjoy it for the interface itself - I think a CRT terminal/emulator is peak aesthetic - and a few who learned the terminal first and aren't willing to invest the time to learn a GUI.

Calling either group a luddite is stupid, but if I were forced to defend one side: given that most people start with a GUI because it's so much easier, I'd rather make the argument that those who never progress onto the faster, more powerful options deserve the insult of luddite.
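
To make the speed claim concrete, a trivial sketch (the directory and the ".log" filter are made-up examples): a bulk rename that takes a few lines at a prompt but hundreds of clicks in a file manager.

    # Hypothetical example: prefix every .log file in the current
    # directory with its last-modified date. Seconds in a terminal
    # session, tedious by hand in a GUI file manager.
    import datetime
    import os

    for name in os.listdir("."):
        if name.endswith(".log"):
            stamp = datetime.date.fromtimestamp(os.path.getmtime(name))
            os.rename(name, f"{stamp}-{name}")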

reply
paganel
30 minutes ago
[-]
That's because AI-generated memes are lame. Not that memes are smart, generally speaking, but the AI-generated ones are even lamer. And there's nothing wrong with being a luddite; to the contrary, in this day and age, still thinking that technology is the way forward no matter what is nothing short of criminal.
reply
otabdeveloper4
1 hour ago
[-]
> is absolutely filled with busy work that no one really wants to do

Well, LLMs don't fix that problem.

(They fix the "need to train your classification model on your own data" problem, but none of you care about that, you want the quick sci-fi assistant dopamine hit.)

reply
bgwalter
1 hour ago
[-]
Being anti "AI" has nothing to do with being progressive. Historically, hackers have always rejected bloated tools, especially those that are not under their control and that spy on them and build dossiers like ChatGPT.

Hackers have historically derided any website generators or tools like ColdFusion[tm] or VisualStudio[tm] for that matter.

It is relatively new that some corporate owned "open" source developers use things like VSCode and have no issues with all their actions being tracked and surveilled by their corporate masters.

Please do no co-opt the term "hacker".

reply
forgetfulness
1 hour ago
[-]
Hackers never had a very cohesive and consistent ideology or moral framework - we heard nonstop of the exploits of people funded as part of Cold War military pork projects that eventually got the plug pulled - but some antipathy and mistrust of the powerful, and a belief in the power of knowledge, were recurrent themes nonetheless

So why is it a surprise that hackers mistrust these tools pushed by megacorps that also sell surveillance to governments, with “suits” promising other “suits” that they’ll be making knowledge obsolete? That people will no longer need to use their brains, that people with knowledge won’t be useful?

It’s not Luddism when people with an ethos of empowering the individual through knowledge resist these forces

reply
amarant
42 minutes ago
[-]
The problem here isn't resisting those forces, that's all well and good.

The problem is the vast masses falling under Turing's Law:

"Any person who posts a sufficiently long text online will be mistaken for an AI."

Not usually in good faith, however.

reply
forgetfulness
25 minutes ago
[-]
I don’t know how we’ll fix it

Just taking what people argue for on its own merits breaks down when your capacity to read whole essays or comment chains is so easily overwhelmed by the speed at which people put out AI slop

How do you even know that the other person read what they supposedly wrote, themselves, and you aren’t just talking to a wall because nobody even meant to say the things you’re analyzing?

Good faith is impossible to practice this way; I think people need to prove that the media was produced in good faith somehow before it can be reasonably analyzed in good faith

It’s the same problem with 9000 slop PRs submitted for code review

reply
amarant
4 minutes ago
[-]
I've seen it happen to short, well-written articles. Just yesterday there was an article that discussed the author's experience maintaining his FOSS project after getting a fair number of users, and of course someone in the HN comments claimed it was written by AI, even though there were zero indications it was and plenty of indications it wasn't.

Someone even argued that you could use prompts to make it look like it wasn't AI, and that this was the best explanation for why it didn't look like AI slop.

If we can't respect genuine content creators, why would anyone ever create genuine content?

I get that these people probably think they're resisting AI, but in reality they're doing the opposite: these attacks weigh way heavier on genuine writers than they do on slop-posters.

The blanket bombing of "AI slop!" comments is counterproductive.

It is kind of a self-fulfilling prophecy, however: keep it up and soon everything really will be written by AI.

reply
FuriouslyAdrift
1 hour ago
[-]
VSCodium is the open source "clean" build of VS Code without all the Microsoft telemetry and under MIT license.

https://vscodium.com/

reply
Aurornis
1 hour ago
[-]
> Hackers have historically derided any website generators or tools like ColdFusion[tm] or VisualStudio[tm] for that matter.

A lot of hackers, including the black hat kind, DGAF about your ideological purity. They get things done with the tools that make it easy. The tools they’re familiar with.

Some of the hacker circles I was most familiar with in my younger days primarily used Windows as their OS. They did a lot of reverse engineering using Windows tools. They might have used .NET to write their custom tools because it was familiar and fast. They pulled off some amazing reverse engineering feats.

Yet when I tell people they preferred Windows over Linux, you can tell who's more focused on ideological purity than actual achievements, because eww, Windows.

> Please do no co-opt the term "hacker".

Right back at you. To me, hacker is about results, not about enforcing ideological purity about only using the acceptable tools on your computer.

In my experience: The more time someone spends identifying as a hacker, gatekeeping the word, and trying to make it a culture war thing about the tools you use, the less “hacker” like they are. When I think of hacker culture I think about the people who accomplish amazing things regardless of the tools or whether HN finds them ideologically acceptable to use.

reply
phil21
53 minutes ago
[-]
> To me, hacker is about results

Same to me as well. A hacker would "hack out" some tool in a few crazy caffeine-fueled nights that would be ridiculed by professional devs who had been working on the problem as a 6-man team for a year. Only the hacker's tool actually worked and saved 8000 man-hours of dev time. The code might be ugly, might use foundational tech everyone sneers at - but the job would be done. Maintaining it was left up to the normies to figure out.

It implies deep-level expertise about a specific niche in the space they are hacking on. And it implies "getting shit done" - not making things full of design beauty.

Of course there are different types of hackers everywhere - but that was the "scene" to me back in the day. Teenage kids running circles around the greybeards clucking at the kids doing it wrong.

reply
Aurornis
45 minutes ago
[-]
> but that was the "scene" to me back in the day.

Same. Back then, and even now, the people who were busy criticizing other people for using the wrong programming language, text editor, or operating system were a different set of people than the ones actually delivering results.

In a way it was like hacker fashion: These people knew what was hot and what was not. They ran the right window manager on the right hardware and had the right text editor and their shell was tricked out. They knew what to sneer at and what to criticize for fashion points. But actually accomplishing things was, and still is, orthogonal to being fashionable.

reply
DennisP
28 minutes ago
[-]
To wit: my brother has never worked as a developer and has just a limited knowledge of Python. In the past few days, he's designed, vibe-coded, and deployed a four-player online chess game, in about four hours of actual work, using Google's Antigravity. I looked at the code when it was partly done, and it was pretty good.

The gatekeepers wouldn't consider him a hacker, but that's kinda what he is now.

reply
mattgreenrocks
44 minutes ago
[-]
Ideological purity is a crutch for those that can't hack it. :)

I love it when the .NET threads show up here, people twist themselves in knots when they read about how the runtime is fantastic and ASP.NET is world class, and you can read between the lines of comments and see that it is very hard for people to believe these things while also knowing that "Micro$oft" made them.

Inevitably when public opinion swells and changes on something (such as VSCode), all the dissonance just melts away, and they were _always_ a fan. Funny how that works.

reply
billy99k
13 minutes ago
[-]
Being anti-AI means you want to conserve the old ways over new technology. Hardly what I would call 'progressive'.
reply
lxgr
1 hour ago
[-]
> hackers have always rejected bloated tools [...] Hackers have historically derided any website generators

Ah yes, true hackers would never, say, build a Debian package...

Managing complexity has always been part of the game. To a very large extent it is the game.

Hate the company selling you a SaaS subscription to the closed-source tool if you want, and push for open-source alternatives, but don't hate the tool, and definitely don't hate the need for the tool.

> Please do no co-opt the term "hacker".

Indeed, please don't. And leave my true scotsman alone while we're at it!

reply
bgwalter
1 hour ago
[-]
Local alternatives don't work, and you know that.
reply
brendoelfrendo
28 minutes ago
[-]
> That's the thing, hacker circles didn't always have this 'progressive' luddite mentality.

I think, by definition, Luddites or neo-Luddites or whatever you want to call them are reactionaries but I think that's kind of orthogonal to being "progressive." Not sure where progressive comes in.

> All that said, the actual workday is absolutely filled with busy work that no one really wants to do, and the refusal of a loud minority to engage with that fact is what's leading to this.

I think that's maybe part of the problem? We shouldn't try to automate the busy work, we should acknowledge that it doesn't matter and stop doing it. In this regard, AI addresses a symptom but does not cure the underlying illness caused by dysfunctional systems. It just shifts work over so we get to a point where AI generated output is being analyzed by an AI and the only "winner" is Anthropic or Google or whoever you paid for those tokens.

> These people bring way more toxicity to daily life than who they wage their campaigns against.

I don't believe for a second that a gaggle of tumblrinas are more harmful to society than a single Sam Altman, lol.

reply
embedding-shape
4 hours ago
[-]
> And yeah, I get it. We programmers are currently living through the devaluation of our craft, in a way and rate we never anticipated possible.

I'm a programmer, been coding professionally for 10 something years, and coding for myself longer than that.

What are they talking about? What is this "devaluation"? I'm getting paid more than ever for a job I feel like I almost shouldn't get paid for (I'm just having fun), and programmers should be some of the most worry-free individuals on this planet: the job is easy and well-paid, there are not a lot of health drawbacks if you have a proper setup, and it's relatively easy to find a new job when you need it (granted, the US seems to struggle with that specific point as of late, yet it remains true in the rest of the world).

And now, we're having a huge explosion of tools for developers, to build software that has to be maintained by developers, made by developers for developers.

If anything, it seems like Ballmer's plea of "Developers, developers, developers" has come true, and if there will be one profession left in 100 years when AI does everything for us (if the vibers are to be believed), then that'd probably be software developers and machine learning experts.

What exactly is being devalued for a profession that seems to be continuously growing, and has been doing so for at least 20 years?

reply
swatcoder
2 hours ago
[-]
The "devaluation" they mention is just the correction against the absurd ZIRP bump, that lured would-be doctors and lawyers into tech jobs at FAANG and FAANG-alike firms with the promise of upper middle class lifestyles for trivially weaving together API calls and jockeying JIRA tickets. You didn't have to spend years more in grad school, you didn't have to be a diligent engineer. You just had to had to have a knack for standardized tests (Leetcode) and the time to grid some prep.

The compensation and hiring for that kind of inexpert work were completely out of sync with anything sustainable but held up for almost a decade because money was cheap. Now, money is held much more tightly and we stumbled into a tech that can cheaply regurgitate a lot of so the trivial inexpert work, meaning the bottom fell out of these untenable, overpaid jobs.

You and I may not be effected, having charted a different path through the industry and built some kind of professional career foundation, but these kids who were (irresponsibly) promised an easy upper middle class life are still real people with real life plans, who are now finding themselves in a deeply disappointing and disorienting situation. They didn't believe the correction would come, let alone so suddenly, and now they don't know how they're supposed to get themselves back on track for the luxury lifestyle they thought they legitimately earned.

reply
j4coh
1 hour ago
[-]
I don't believe companies can reliably tell expert and non-expert developers apart well enough to sort them as efficiently as you say.
reply
kelseyfrog
35 minutes ago
[-]
The companies that can will remain and the companies that can't will perish. Not necessarily quickly nor gracefully, but the market stops all bucks.
reply
marcosdumay
19 minutes ago
[-]
You are assuming no government intervention or anti-competitive measures.

Neither has been true for a really long time.

reply
mkoubaa
17 minutes ago
[-]
If the market is allowed to behave like one*
reply
bavell
56 minutes ago
[-]
Nailed it. It's a pendulum and we're swinging back to baseline. We just finished our last big swing (ZIRP, post-COVID dev glut) and are now in full free fall.
reply
abraxas
1 hour ago
[-]
You sound exactly like that turkey from Nassim Taleb's books that came to the conclusion that the purpose of human beings is to make turkeys very happy with lots of food and breeding opportunities. And the turkey's thesis gets validated perfectly every day he wakes up to a delicious fatty meal.

Until Thanksgiving.

reply
ragazzina
49 minutes ago
[-]
The turkey story predates Nassim Taleb's books by decades.
reply
lxgr
58 minutes ago
[-]
Unlike the turkeys, they seem rather self aware about it.
reply
thunky
3 hours ago
[-]
> What exactly is being de-valuated for a profession

You're probably fine as a more senior dev...for now.

But if I was a junior I'd be very worried about the longevity I can expect as a dev. It's already easier in many/most cases to assign work to an LLM vs handholding a human through it.

Plus, as an industry, we've been exploiting our employers' lack of information to extract large salaries while producing largely poor-quality outputs, imo. And as that ignorance moat gets smaller, this becomes harder to pull off.

reply
spicyusername
3 hours ago
[-]
> assign work to an LLM

This is just not happening anywhere around me. I don't know why it keeps getting repeated in every one of these discussions.

Every software engineer I know is using LLM tools, but every team around me is still hiring new developers. Zero firing is happening in any circle near me due to LLMs.

LLMs cannot do unsupervised work, period. They do not replace developers. They replace Stack Overflow and Google.

reply
neom
2 hours ago
[-]
I can tell you where I am seeing it change things for sure: at the early stages. If you wanted to work at a startup I advise or invest in, based on what I'm seeing, it might be more difficult than it was 5 years ago, because there is a slightly different calculus at the early stage. Often your go-to-market and discovery processes at seed/pre-seed are either not working well yet, nonexistent, or decoupled from prod and eng; the goal, obviously, is to bring it all together over time into a complete system (a business). As long as I've been around early-stage startups there has always been a tension between engineering and growth over budget division, and the dance of how you place resources across them such that they come together well is quite difficult. Now what I'm seeing is: engineering could do with being a bit faster, but too much faster and they're going to be sitting around waiting for the business teams to get their shit together. Whereas before they would look at hiring a junior, now they will just hire some AI tools, or invest more time in AI scaffolding, etc., allowing them to go a little bit faster - but it's understood: not as fast as hiring a jr engineer. I noticed this trend starting in the spring this year, and I've been watching to see if the teams who did this then "graduate" out of it to hiring a jr; so far only one team has hired, and it seems they skipped jr and went straight to a more sr dev.
reply
cjbgkagh
2 hours ago
[-]
Around 80% of my work is easy while the remaining 20% is very hard. At this stage the hard stuff is far outside the capability of LLMs, but the easy stuff is very much within their capabilities. I used to hire contractors to help with that 80%, but now I use LLMs instead. It's far cheaper, better quality, and zero hassle. That's 3 junior/mid-level jobs that are gone now. Since the hard stuff is combinatorially complex, I think by the time LLMs are good enough to do it, they're probably good enough to do just about everything, and we'll be living in an entirely different world.
reply
scarface_74
1 hour ago
[-]
Exactly this. I lead cloud consulting + app dev projects. Before, I would have staffed my projects with at least me leading them - doing the project management + stakeholder meetings and some of the work - and bringing a couple of others in to do some of the grunt work. Now with Gen AI, even just using ChatGPT and feeding it a lot of context - diagrams I put together, statements of work, etc - I can do it all myself without having to go through the coordination effort of working with two other people.

On the other hand, when I was staffed to lead a project that did have another senior developer, one level below me, I tried to split up the actual work, but it became such a coordination nightmare once we started refining the project, because he could just use Claude Code and it would make all of the modifications needed for a feature, from the front-end work to the backend APIs to the Terraform and the deployment scripts.

I would have actually slowed him down.

reply
vladimirralev
2 hours ago
[-]
Today's high-end LLMs can do a lot of unsupervised work. Debug iterations are at least junior level. Audio and visual output verification is still very weak (e.g., verifying web page layout and component reactivity); once the visual models are good enough to look at the screen pixels and understand them, they will instantly replace junior devs. Currently, if you have only text output, all new LLMs can iterate flawlessly and solve problems with it. New backend development from scratch is completely doable with vibe coding now, with some exceptions around race conditions and legacy code comprehension.
reply
chud37
2 hours ago
[-]
Completely agree. I use LLMs like I use Stack Overflow, except this time I get straight to the answer and no one closes my question and marks it as a duplicate, or stupid.

I don't want it integrated into my IDE; I'd rather just give it the information it needs to get me my result. But yeah, just another Google or Stack Overflow.

reply
raw_anon_1111
1 hour ago
[-]
Well, your anecdote is clearly at odds with absolutely all of the macroeconomic data.
reply
queenkjuul
3 hours ago
[-]
You're mostly right, but very few teams are hiring in the grand scheme of things. The job market is not friendly for devs right now (not saying that's related to AI, just a bad market right now).
reply
carrychains
2 hours ago
[-]
It's me. I'm the LM having work assigned to me that junior devs used to get. I'm actually just a highly proficient BA who has almost always read code, and who followed and understood news about software development here and on /. before, but generally avoided writing code out of sheer laziness. It's always been more convenient to find something easier and more lucrative in those moments of decision where I actually considered shifting to coding as my profession.

But here I am now. After filling in for lazy architects above me for 20 years, while guiding developers to follow standards and build good habits and learning important lessons from talking to senior devs along the way - guess what, I can magically do it myself now. The LM is the junior developer that I used to painstakingly explain the design to, and it screws things up half as much as the braindead and uncaring jr dev used to. Maybe I'm not a typical case, but it shows a hint of where things might be going. This will only get easier as the tools become more capable and mature into something more reliable.

reply
chrisweekly
2 hours ago
[-]
LM?
reply
knollimar
46 minutes ago
[-]
They mean LLM
reply
grumbel
3 hours ago
[-]
> This is just not happening anywhere around me.

Don't worry about where AI is today; worry about where it will be in 5-10 years. AI is brand-new, bleeding-edge technology right now, and adoption always takes time, especially when the integration with IDEs and such is even more bleeding-edge than the underlying AI systems themselves.

And speaking about the future, I wouldn't just worry about it replacing the programmer; I'd worry about it replacing the program. The future we are heading into might be one where the AI is your OS. If you need an app to do something, you can just make it up on the spot; a lot of classic programs will no longer need to exist.

reply
danaris
2 hours ago
[-]
> Don't worry about where AI is today, worry about where it will be in 5-10 years.

And where will it be in 5-10 years?

Because right now, the trajectory looks like "right about where it is today, with maybe some better integrations".

Yes, LLMs experienced a period of explosive growth over the past 5-8 years or so. But then they hit diminishing returns, and they hit them hard. Right now, it looks like a veritable plateau.

If we want the gap between now and 5-10 years from now to look like the gap between now and 5-10 years ago, we're going to need a new breakthrough. And those don't come on command.

reply
CuriouslyC
2 hours ago
[-]
Right about where it is today with better integrations?

One year is the difference between Sonnet 3.5 and Opus 4.5. We're not hitting diminishing returns yet (mostly because of exponential capex scaling, but still). We're already committed to ~3 years of the current trajectory, which means we can expect similar performance boosts year over year.

The key to keep in mind is that LLMs are a giant bag of capabilities, and hitting diminishing returns on one capability doesn't say much, if anything, about your ability to scale other capabilities.

reply
cpt_sobel
34 minutes ago
[-]
> We're already committed to ~3 years of the current trajectory

How do you mean committed?

reply
catlifeonmars
1 hour ago
[-]
You buried the lede with “exponential capex scaling”. How is this technology not like oil extraction?

The bulk of that capex is chips, and those chips are straight up depreciating assets.

reply
CuriouslyC
27 minutes ago
[-]
The depreciation schedule is debatable (and that's currently a big issue!). We've been depreciating based on availability of next generation chips rather than useful life, but I've seen 8 year old research clusters with low replacement rates. If we stop spending on infra now, that would still give us an engine well into the next decade.
reply
vrighter
39 minutes ago
[-]
better integrations won't do anything to fix the fact that these tools are, by their mathematical nature, unreliable and always will be
reply
lupire
1 hour ago
[-]
It's a trope that people say this and then someone points out that while the comment was being drafted another model or product was released that took a substantial step up on problem solving power.
reply
enraged_camel
1 hour ago
[-]
I use LLMs all day every day. There is no plateau. Every generation of models has resulted in substantial gains in capability. The types of tasks (both in complexity and scope) that I can assign to an LLM with high confidence is frankly absurd, and I could not even dream of it eight months ago.
reply
marcosdumay
16 minutes ago
[-]
Rest assured that LLMs are completely incapable of replacing mildly competent junior developers. And that's fundamental to the technology.

Instead, junior developers are the ones who benefit the most from using them.

It's only management that believes otherwise. Because of deceitful marketing from a few big corporations.

reply
Aldipower
16 minutes ago
[-]
> assign work to an LLM

What are you talking about? You seem to live in a parallel universe. Every single time I or one of my colleagues tried this, the task failed tremendously.

reply
HarHarVeryFunny
2 hours ago
[-]
> But if I was a junior I'd be very worried about the longevity I can expect as a dev. It's already easier in many/most cases to assign work to an LLM vs handholding a human through it.

This sounds kind of logical, but really isn't.

In reality you can ASSIGN a task to a junior dev and expect them to eventually complete it, and to learn from the experience as well. Sure, there'll likely be some interaction between the junior dev and a mentor, and this is part of the learning process - something DESIRABLE, since it leads to the developer getting better.

In contrast, you really can't "assign" something to an LLM. You can of course try to, and give it some "vibe coding" assignment like "build me a backend component to read the data from the database", but the LLM/agent isn't an autonomous entity that can take ownership of the assignment and be expected to do whatever it takes (e.g. coming back to you and asking for help) to get it done. With today's "AI" technology it's the AI that needs all the handholding, and the person using the AI is the one who has effectively taken the assignment, not the LLM.

Also, given the inability of LLMs to learn on the job, using an LLM as a tool to help get things done is going to be a Groundhog Day experience of having to micro-manage the process in the same way over and over again each time you use it... time that would have been better invested in helping a junior dev get up to speed and become, in the future, an independent developer that tasks can indeed be assigned to.

reply
enraged_camel
1 hour ago
[-]
>> e.g. coming back to you and asking for help

Funny you mention this because Opus 4.5 did this just yesterday. I accidentally gave it a task with conflicting goals, and after working through it for a few minutes it realized what was going on, summarized the conflict and asked me which goal should be prioritized, along with detailed pros and cons of each approach. It’s exactly how I would expect a mid level developer to operate, except much faster and more thorough.

reply
HarHarVeryFunny
1 hour ago
[-]
Yes, they continue to get better, but they are not at human level (and jr devs are humans too) yet, and I doubt the next level "AGI" that people like Demis Hassabis are projecting to still be 10 years away will be human level either.
reply
lupire
1 hour ago
[-]
Doesn't matter. First, yes, a modern AI will come back and ask questions. Second, the AI is so much faster at interactions than a human that you can use the saved time to glance at its work and redirect it. The AI will come back with 10 prototype attempts in an hour, while a human will take a week for each, with more interrupting questions for you about easy things.
reply
HarHarVeryFunny
1 hour ago
[-]
Sure, LLMs are a useful tool, and fast, but the point is they don't have human level intelligence, can't learn, and are not autonomous outside of an agent that will attempt to complete a narrow task (but with no ownership and guarantee of eventual success).

We'll presumably get there eventually and build "artificial humans", but for now what we've got is LLMs - tools for language task automation.

If you want to ASSIGN a task to something/someone then you need a human or an artificial human. For now that means assigning the task to a human, who will in turn use the LLM as a tool. Sure, there may be some productivity increase (although some studies have indicated the exact opposite), but ultimately if you want to be able to get more work done in parallel then you need more entities that you can assign tasks to, and for the time being that means humans.

reply
walt_grata
2 hours ago
[-]
LLMs vs human

Handholding the human pays off in the long run more than handholding the LLM, which requires more handholding anyway.

Claude doesn't get better as I explain concepts to it the same way a jr engineer does.

reply
cjbgkagh
1 hour ago
[-]
I had hired 3 junior/mid-level devs and paid them to do nothing but study to improve their skills; it was my investment in their future, and I had a big project on the horizon that I needed help with. After 6 months I let them go - the improvement was far too slow. Books that should have taken a week to get through were taking 6 weeks. Since then, LLMs have completely surpassed them. I think it's reasonable to think that some day, maybe soon, LLMs will surpass me. Like everyone else, I have to do the best I can while I can.
reply
eithed
1 hour ago
[-]
But this is an issue of the worker you're hiring. I've worked with senior engineers who:

a) did nothing (as in, really did not write anything within the sprint, nor do any other work)

b) worked on things they wanted to work on

c) did ONLY things they were assigned in the sprint (= if there were 10 tickets in the sprint and they were assigned 1 of them, they would finish that ticket and not pick up anything else, staying quiet)

d) worked only on tickets with requirements explicitly stated step by step ("open file a, change line 89 to be `checkBar` instead of `checkFoo`"...). Having to write this would take longer than making the changes myself - I was literally writing in the Jira ticket what I wanted the engineer to code - otherwise they would come back with "not enough spec, can't proceed".

All of these cases - senior people!

Sure - LLMs will do what they're told (to a specific value of "do" and "what they're told")

reply
cjbgkagh
22 minutes ago
[-]
Sure, there is a wide spectrum of skills; having worked in FANG and top-tier research, I have a pretty good idea of the capability at the top of the spectrum, and I know I wasn't hiring at that level. I was paying 2x the local market rate (non-US) and pulling from the functional programming talent pool. These were not the top 1%, but I think they were easily top 10% and probably in the top 5%.

I use LLMs to build isolated components, and I do the work needed to specialize them for my tasks and integrate them together. The LLMs take fewer instructions to do this and handle ambiguity far better. Additionally, because of the immediate feedback loop on the specs, I can try first with a minimally defined spec and interactively refine as needed. It takes me far less work to write specs for LLMs than it does for other devs.

reply
raw_anon_1111
1 hour ago
[-]
If you are a “senior” engineer who is doing nothing but pulling well defined Jira tickets off the board, you’re horribly mis titled.
reply
mkoubaa
14 minutes ago
[-]
You need to hit that thumbs down with the explanation so the model is trained with the penalty applied. Otherwise your explanations are not in the training corpus
reply
sebasvisser
2 hours ago
[-]
Maybe see it less as a junior and a replacement for humans, and more as a tool for you! A tool so you can now do yourself the stuff you used to delegate/dump to a junior.
reply
lupire
1 hour ago
[-]
Claude gets better as Claude's managers explain concepts to it. It doesn't learn the way a human does. AI is not human. The benefit is that when Claude learns something, it doesn't need to run a MOOC to teach the same things to millions of individuals. Every copy of Claude instantly knows.
reply
xtiansimon
3 hours ago
[-]
> “…exploiting our employer's lack of information…”

I agree in the sense that those of us who work in for-profit businesses have benefited from employers' willingness to spend on dev budgets (salaries included) without having to spend their own _time_ becoming increasingly involved in the work. As "AI" develops it will blur the boundaries of roles and reshape how capital can be invested to deliver results and have impact. And if the power dynamics shift (i.e. out of the class of educated programmers to, I dunno, philosophy majors) then you're in trouble.

reply
singpolyma3
2 hours ago
[-]
If one is a junior the goal is to become a senior though. Not to remain a junior.
reply
solids
2 hours ago
[-]
Yes, but the barrier to becoming a senior is what's currently in dispute.
reply
luxuryballs
34 minutes ago
[-]
It just makes you more powerful, not less. When we got rid of rooms full of typewriters, it was because we became more productive, not less.
reply
vrighter
42 minutes ago
[-]
provided the senior dev sets aside time to review that slop.
reply
JeremyNT
1 hour ago
[-]
> And now, we're having a huge explosion of tools for developers, to build software that has to be maintained by developers, made by developers for developers.

What do you think they're building all those datacenters for? Why do you think so much money is pouring into AI companies?

It's not to help make developers more efficient with code assistants.

Traditional computation will be replaced with bots in every aspect of software. The goal is to devalue our labor and replace it with computation performed by machines owned by the wealthy, who can lease this out.

If you can't see this coming you lack both imagination and historical perspective.

Five years ago Claude Code would have been essentially unimaginable. Consider this.

So sure, enjoy your job churning out buggy whips while you can, but you better have a plan B for when the automobiles truly arrive.

reply
allturtles
1 hour ago
[-]
I agree with all this, except there is no plan B. What could plan B possibly be when white collar work collapses? You can go into a trade, but who will be hiring the tradespeople?
reply
Gagarin1917
1 hour ago
[-]
The companies who now have piles of cash because they eliminated a huge chunk of labor will spend far more on new projects, many of which will require tradesmen.

Economic waves never hit one sector and stop. The waves continue across the entire economy. You can’t think “companies will get rid of huge amounts of labor” and then stop asking questions. You need to then ask “what will companies do with decreased labor costs?” and “what could that investment look like, and who will they need to hire to fulfill it?” and then “what will those workers do after their demand increases?” And so on.

reply
allturtles
45 minutes ago
[-]
I would look at the secondary consequences of the totaling of white collar labor in the same way. Without the upper-middle-class spending their disposable income, consumer spending shrivels, advertising dollars dry up, and investment in growth no longer makes sense in most industries. It looks like a path to total economic destruction to me.
reply
aishsh
1 hour ago
[-]
I think it’s much more likely they’ll be used for mass surveillance purposes. The tech is already there, they just need the compute (and a lot of it).

Most of the economy is making things that aren’t really needed. Why bother keeping that afloat when it’s 90% trinkets for the proles? Once they’ve got the infra to ensure compliance, why bother with all the fake work, which is the real opium of the masses?

reply
acedTrex
35 minutes ago
[-]
I consider the devaluation of the craft to be completely independent from the professional occupation of software.

Programming has been devalued because more people can do it at a basic level with LLM tooling. People who, in my view, aren't smart enough and haven't put in enough work can now output things they don't really understand themselves.

It is of course the new reality, and now we all have to go find new markers/things to judge people's output by. That's the devaluation of the craft itself.

For what it's worth, this devaluation has happened many times in this field. ASM, compilers, managed GC languages, the cloud: abstractions have continually opened up the field to people the old timers consider unworthy.

LLMs are a unique twist on that standard pattern.

reply
csmantle
4 hours ago
[-]
> programmers should be some of the most worry-free individuals on this planet, the job is easy, well-paid, not a lot of health drawbacks if you have a proper setup and relatively easy to find a new job when you need it

Not where I live, though. Competition is fierce, both in industry and academia: most posts are saturated, and most employees face "HR optimization" in their late 30s. Not to mention working overtime, and its physical consequences.

reply
llm_nerd
22 minutes ago
[-]
"Not in where I live though"

I mean, not anywhere. Their comment has little correlation with reality, and seems to be a contrived, self-comforting fiction. Most firms have implemented hiring freezes if not actively downsizing their dev staff. Many extremely experienced devs are finding the market absolutely atrocious, getting zero bites.

And for all of the "well us senior devs are safe" sentiment often seen on here, many shops seem to be more comfortable hiring cheap and eager junior devs and foregoing seniors because LLMs fill in a lot of the "grizzled wisdom". The junior to senior ratio is rapidly increasing, and devs who lived on golden handshakes are suddenly finding their ego bruised and a market where they're fighting for low-pay jobs.

reply
embedding-shape
4 hours ago
[-]
Again, compare this to other professions, don't look at it in isolation, and you'll see why you're still having (or will be having, since you seem to still be a student) a much more pleasant life than others.
reply
tdeck
2 hours ago
[-]
This is completely irrelevant. The point is that the profession is being devalued, i.e. losing value relative to where it was. If, for example, the US dollar loses value, it's not a "counterargument" to point out that it's still much more valuable than the Zimbabwe dollar.
reply
ramon156
2 hours ago
[-]
Do other professions expect you to work during personal time? At least blue collar people are done when they get told they're done

I get your viewpoint though, physically exhausting work is probably much worse. I do want to point out that 40 hours has always been above average, and right now it's the default

reply
MattRix
3 hours ago
[-]
This “compare it to other professions” thing doesn’t really work when those other professions are not the one you actually do. The idea that someone should never be miserable in their job because other more miserable jobs exist is not realistic.
reply
embedding-shape
2 hours ago
[-]
It's a useful thing to look at when you feel like all hope is lost and "wow, it is so difficult being a programmer" strikes, because it'll make you realize how easy you have it compared to non-programmers/non-tech people.
reply
MattRix
2 hours ago
[-]
Realizing how supposedly “easy” you have it compared to other people is not as encouraging or motivational as you’re implying it is. And how “easy” do you have it if you can’t find a job in your field?
reply
Mashimo
4 hours ago
[-]
> What exactly is being de-valuated for a profession that seems to be continuously growing

A lot of newly skilled job applicants can't find anything in the job market right now.

reply
DebtDeflation
2 hours ago
[-]
Likewise with experienced devs who find themselves out of work due to the neverending mass layoffs.

There's a huge difference between the perspective of someone currently employed versus that of someone in the market for a role, regardless of experience level. The job market of today is nothing like the job market of 3 years ago. More and more people are finding that out every day.

reply
throwawaysleep
34 seconds ago
[-]
"Newly skilled" means needs supervision. If you have to supervise the work, then an AI is definitely superior.
reply
embedding-shape
4 hours ago
[-]
Based on conversations with peers over the last ~3 years or so, some of whom retrained to become programmers, this doesn't seem to be as absolute as you paint it.

But as mentioned earlier, the situation in the US seems much more dire than elsewhere. People I know who entered the programming profession in South America, Europe and Asia in these last years don't seem to have had more trouble than I did when I got started. Yes, it requires work, just like it did before.

reply
DJBunnies
4 hours ago
[-]
Nah it's pretty bad, but congrats on being an outlier.
reply
embedding-shape
4 hours ago
[-]
Literally the worst job you can find as a programmer today (if you lower your standards and, in particular, stay away from cryptocurrency jobs) is 10x better than the non-programmer jobs you can find.

If you don't trust me, give a non-programming job a try for 1 year and then come back and tell me how much more comfy $JOB was :)

reply
RHSeeger
2 hours ago
[-]
> Literally the worst job you can find as a programmer today (if you lower your standards and, in particular, stay away from cryptocurrency jobs) is 10x better than the non-programmer jobs you can find.

This is a ridiculous statement. I know plenty of people (that are not developers) that make around the same as I do and enjoy their work as much as I do. Yes, software development is a great field to be in, but there's plenty of others that are just as good.

reply
kamaal
3 hours ago
[-]
>>Literally the worst job you can find as a programmer today (if you lower your standards and, in particular, stay away from cryptocurrency jobs) is 10x better than the non-programmer jobs you can find.

A lot of non-programmer jobs have a kind of union protection, pension plans and other perks, even health care. That makes a crappy salary and work environment bearable.

There was this VP of HR at an Indian outsourcing firm, and she said something to the effect that software jobs appear to pay to the moon, to have employees generate tremendous value for the company, and to carry a general appeal that only smart people work these jobs. None of this holds for the majority of people. So after 10-15 years you actually kind of begin to see why someone might want to work a manufacturing job.

Life is long; job guarantees, pensions etc. matter far more than 'move fast and break things' glory as you age.

reply
queenkjuul
2 hours ago
[-]
I was a lot happier in previous non-programming jobs, they just were much worse at paying the bills. If I could make my programming salary doing either of my previous jobs, I would go back in a heartbeat. Hell, if I could make even 60% of my programming salary doing those jobs I'd go back.

I enjoy the practice of programming well enough, but I do not at all love it as a career. I don't hate it by any means either, but it's far from my first choice in terms of career.

reply
nake89
3 hours ago
[-]
> give a non-programming job a try for 1 year

I have a mortgage, 3 kids and a wife to support. So no. I don't think I'm going to do that. Also, I like my programming job.

EDIT: Sorry I thought you were saying the opposite. Didn't realize you were the OP of this thread.

reply
raincole
4 hours ago
[-]
Because tech corps overhired[0] when the interest rate was low.

Even after the layoffs, most big tech corps still have more employees today than they did in 2020.

The situation is bad, but the lesson to learn here is that a country should handle a pandemic better than "lowering interest rates to near zero and increasing government spending." That just kicks the problem down the road and snowballs it into the next four years.

[0]: https://www.dw.com/en/could-layoffs-in-tech-jobs-spread-to-r...

reply
IAmBroom
2 hours ago
[-]
I think it was more sandbagging than snowballing. The pain was spread out, and mostly delayed, which kept the economy moving despite everything.

Remember that most of the economy is actually hidden from the stock market, its most visible metric. Over half of business activity comes from privately-owned small businesses, and at the local level, forcibly shutting down all but essential-service shops was devastating. Without government spending, it's hard to imagine how most of those business owners and their employees would have survived, let alone their shops.

Yet we had no bread lines, no (increase in) migratory families chasing cash labor markets, and demands on charity organizations were heavy, but not overwhelming.

But you claim "a country should handle a pandemic better..." - what should we have done instead? Criticism is easy.

reply
Hendrikto
2 hours ago
[-]
It seems like most companies are just using AI as a convenient cover for layoffs. If you say, “We enormously over-hired and have to do layoffs,” your stock tanks. If you instead say that you are laying off the same 20k employees ‘because AI’, your stock pumps for no reason. It’s just framing.
reply
phkahler
1 hour ago
[-]
>> A lot of newly skilled job applicants can't find anything in the job market right now.

That is not unique to programming or tech generally. The overall US job market is kind of shit right now.

reply
spicyusername
3 hours ago
[-]
100% my experience as well.

Negativity spreads so much more quickly than positivity online, and I feel as though too many people live more in self-reinforcing negative comment sections and blog posts than in the real world, which gives them a distorted view.

My opinion is that LLMs are doing nothing but accelerating what's possible with the craft, not eliminating it. If anything, this makes a single developer MORE valuable, because they can now do more with less.

reply
evnp
2 minutes ago
[-]
Exactly. The problem is that instead of getting a raise because "you can do more now", your colleagues will be laid off. Why pay for 3 devs when the work can be done by 1 now? And we all better hope that actually pans out in whatever legacy codebase we're dealing with.

Now the job market is flooded due to layoffs, further justifying lack of comp adjustment - add inflation, and you have "de-valuing" in direct form.

reply
m_a_g
4 hours ago
[-]
> I'm getting paid more than ever for a job I feel like I almost shouldn't get paid for (I'm just having fun)

In my Big Tech job, I sometimes forget that some people can really enjoy what they do. It seems like you're in a fortunate position of both high pay and high enjoyment. Congratulations! Out of curiosity, what do you work on?

reply
embedding-shape
4 hours ago
[-]
Right now I'm doing consulting for two companies, maybe a couple of hours per week, mostly having downtime and trying to expand on my machine learning knowledge.

But in general, every job I've had has been "high pay and high enjoyment" even when I initially had "shit pay" compared to other programmers, and the product wasn't really fun, I was still programming, an activity I still love.

Compare this to the jobs I did before, where the physical toll made it impossible to do anything after work because you're exhausted. Even when I got paid more than in my first programming job, the fact that your body is literally unable to move once you get home makes the pay matter less and feel like less.

But for a programmer, you can literally sit still all day, have some meetings in a warm office, talk with some people, type some things into a document, sit and think for a while, and at the end of the month you get a paycheck.

If you never worked in another profession, I think you ("The Programmer") don't realize how lucky you are compared to the rest of the world.

reply
matwood
1 hour ago
[-]
It's a good perspective to keep. I've also worked a lot of crappy jobs. Overnights in a grocery store (IIRC, they paid an extra $0.50/hour to work overnights), fine dining waiter (this one was actually fun, but the partying was too much), on a landscaping crew, etc... I make more money than I ever thought possible growing up. My dad still can't believe I have a job 'playing on the computer' all day, though I mostly manage now.
reply
IAmBroom
2 hours ago
[-]
A useful viewpoint.

I too have worked in shit jobs. I too appreciate that I am currently in a 70F room of my house, wearing a T-shirt and comfy pants, and able to pet my doggos at will.

reply
RHSeeger
2 hours ago
[-]
Mental exhaustion is a thing, too.
reply
queenkjuul
2 hours ago
[-]
I work remote and I hate it: sitting all day is killing me, and my 5-minute daily stand-up is nowhere near enough social interaction for a whole day's work. I've been looking for a role better suited to me for over a year, but the market is miserable.

I miss having jobs where, at least a lot of the time, I was moving around or working directly with other people. More than anything else I miss casual conversation with coworkers (which still happened with excruciating rarity even when I was doing most of my programming in an office).

I'm glad you love programming and find the career ideal. I don't mean to harp or whine, just pointing out that your ideals aren't universal even among programmers.

reply
mattbettinson
1 hour ago
[-]
Get a standing desk and a walking treadmill! It’s genuinely changed my life. I can focus easier, I get my steps in, and it feels like I did something that day.
reply
lopis
4 hours ago
[-]
The job of a programmer is, and has always been, 50% making our job obsolete (through various forms of automation) and 50% ensuring our job security (through various forms of abstraction).
reply
empath75
1 hour ago
[-]
Over the course of my career, probably 2/3rds of the roles I have had (as in my day-to-day work, not necessarily the title) just no longer exist, because people like me eliminated them. I personally was the last person who held a few of those jobs, because I mostly automated them and got promoted, and they didn't hire a replacement. It's not that they hired fewer people, though; they just hired more people, paid them more money, and focused them on more valuable work.
reply
hacb
3 hours ago
[-]
It's absolutely not easy to find a new job in France, and more generally in Europe
reply
embedding-shape
2 hours ago
[-]
My experience, and that of the people I personally know, has been in Western Europe, South America and Asia, and the programmers I know have an easier time finding new jobs compared to other professions.

Don't get me wrong, it's a lot harder for new developers to enter the industry compared to a decade ago, even in Western Europe, but it's still way easier than the lengths people I know who aren't programmers, or even in tech, have to go to.

reply
IAmBroom
2 hours ago
[-]
That's a quantifiable claim. Using experience to "prove" it is inappropriate.

US data does back it up, though. The tech labor sector outperformed all others in the last 10 years. https://www.bls.gov/emp/tables/employment-by-major-industry-...

reply
kalaksi
4 hours ago
[-]
> programmers should be some of the most worry-free individuals on this planet, the job is easy, well-paid, not a lot of health drawbacks...

I don't know what kind of work you do but this depends a lot on what kind of projects you work on

reply
embedding-shape
4 hours ago
[-]
Across ~10 jobs or so, mostly as an employee of 5-100 person companies, sometimes as a consultant, sometimes as a freelancer, but always with a comfy paycheck compared to any other career, and never as taxing (mentally or physically) as the physical labor I did before I was a programmer, and that some of my peers are still doing.

Of course, there are always exceptions, like programmers who need to hike to volcanoes to set up sensors and whatnot, but generally, programmers have one of the most comfortable jobs on the planet today. If you're a programmer, I think it should come relatively easy to acknowledge this.

reply
SkyeCA
2 hours ago
[-]
Comfortable and easy, but satisfying? I don't think so. I've had jobs that were objectively worse that I enjoyed more and that were better for my mental health.
reply
RHSeeger
2 hours ago
[-]
> never as taxing (mental and physical) as the physical labor I did before I was a programmer

I find it... very strange that you think software development is less mentally taxing than physical labor.

reply
kalaksi
4 hours ago
[-]
Sure, it's mostly comfy and well-paid. But like with physical labor, there are jobs/projects that are easy and not as taxing, and jobs that are harder and more taxing (in this case mentally).
reply
embedding-shape
4 hours ago
[-]
Yes, you'll end up in situations where peers/bosses/clients aren't the most pleasant, but compare that to any customer-facing job and you'll quickly be able to shed those moments, since countless people face on a daily basis what for you are rare situations. You can give it a try: work in a call center for a month, and you'll acquire more stress during that month than in even the worst-managed software project.
reply
kalaksi
3 hours ago
[-]
When I was younger, I worked doing sales and customer service at a mall. Mostly approaching people and trying to pitch a product. Didn't pay well, was very easy to get into and do, but I don't enjoy that kind of work (and many people don't enjoy programming and would actually hate it) and it was temporary anyway. I still feel like that was much easier, but more boring.
reply
etrautmann
2 hours ago
[-]
That sounds ideal! I used to be a field roboticist where we would program and deploy robots to Greenland and Antarctica. IMO the fieldwork helped balance the desk work pretty well and was incredibly enjoyable.
reply
conradfr
2 hours ago
[-]
I didn't enter this profession because I love reviewing code though.
reply
elif
2 hours ago
[-]
Then use better software engineering paradigms in how your AI builds projects.

I find the more I specify about all the stuff I thought was hilariously pedantic hyper-analysis when I was in school, the less I have to interpret.

If you use test-driven, well-encapsulated object oriented programming in an idiomatic form for your language/framework, all you really end up needing to review is "are these tests really testing everything they should."
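
A minimal sketch of what that review surface can look like (hypothetical names, pytest-style; the implementation here is just a stand-in for whatever the AI generates):

    import pytest

    def normalize_email(raw: str) -> str:
        # Stand-in for an AI-generated implementation. Under this workflow the
        # implementation can be regenerated freely; the tests below are what a
        # human actually reviews.
        local, _, domain = raw.strip().partition("@")
        if not local or not domain:
            raise ValueError(f"not an email: {raw!r}")
        return f"{local}@{domain.lower()}"

    # The review question is only: do these tests pin down the full spec?
    def test_lowercases_domain():
        assert normalize_email("Bob@Example.COM") == "Bob@example.com"

    def test_strips_surrounding_whitespace():
        assert normalize_email(" bob@example.com ") == "bob@example.com"

    def test_rejects_missing_at_sign():
        with pytest.raises(ValueError):
            normalize_email("not-an-email")

If the tests really cover everything, re-reading the generated code itself becomes optional; if they don't, no amount of reading the implementation will save you.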

reply
roxolotl
4 hours ago
[-]
I came here to quote the same quote but with the opposite sentiment. If you look at the history of work, at least in the states, it’s a history of almost continual devaluation and automation. I’ve been assuming that my generation, entering the profession in the 2010s, will be the last where it’s a pathway to an upper middle class life. Just like the factory workers before us automation will come for those who do mostly repetitive tasks. Sure there will be well paid professional software devs in the future just as there are some well paid factory workers who mostly maintain machines. But the scale of the opportunity will be much smaller.
reply
embedding-shape
4 hours ago
[-]
But in the end, we didn't end up with fewer factories that do more, we ended up with more factories that do more.

Why wouldn't the same happen here? Instead of these programmers jamming out boilerplate 24/7, why are they unable to improve their skills further and move with the rest of the industry, if that's needed? Just like other professions adapt to how society is shaped, why should programming be an exception to that?

reply
overflow897
2 hours ago
[-]
And how is the quality of life for those factory workers? It's almost like the craft of making physical things has been devalued even if we're making more physical things than ever.
reply
syllogism
1 hour ago
[-]
Software to date has been a Jevons good (https://en.wikipedia.org/wiki/Jevons_paradox). Demand for software has been constrained by the cost efficiency and risk of software projects. Productivity improvements in software engineering have resulted in higher demand for software, not less, because each improvement in productivity unblocks more of the backlog of projects that weren't cost effective before.

There's no law of nature that says this has to continue forever, but it's a trend that's been with us since the birth of the industry. You don't need to look at AI tools or methodologies or whatever. We have code reuse! Productivity has obviously improved, it's just that there's also an arms race between software products in UI complexity, features, etc.

If you don't keep improving how efficiently you can ship value, your work will indeed be devalued. It could be that the economics shift such that pretty much all programming work gets paid less, it could be that if you're good and diligent you do even better than before. I don't know.

What I do know is that whichever way the economics shake out, it's morally neutral. It sounds like the author of this post leans into a labor theory of value, and if you buy into that, well...You end up with some pretty confused and contradictory ideas. They position software as a "craft" that's valuable in itself. It's nonsense. People have shit to do and things they want. It's up to us to make ourselves useful. This isn't performance art.

reply
ewzimm
1 hour ago
[-]
What is devalued is traditional labor-based ideology. The blog references Marx's theory of alienation. The Marxist labor theory of value, that the value of anything is determined by the labor that creates it, gives the working class moral authority over the owner class. When labor is reduced, the basis of socialist revolution is devalued, as the working class no longer can claim superior contributions to value creation.

If one doesn't subscribe to traditional Marxist ideology, this argument won't land the same way, but elements of these ideas have made their way into popular ideas of value.

reply
i_love_retros
2 hours ago
[-]
> I'm getting paid more than ever for a job I feel like I almost shouldn't get paid for (I'm just having fun), and programmers should be some of the most worry-free individuals on this planet, the job is easy

Eh?

I'm happy for you (and envious), because that is not my experience. The job is hard. Agile's constant fortnightly deadlines, a complete lack of respect by the rest of the stakeholders for the work developers do (even more so now because "ai can do that"), changing requirements but an expectation to welcome changing requirements because that is agile, incredibly egotistical assholes that seem to gravitate to engineering manager roles, and a job market that's been dead for a few years now.

No doubt some will comment and say that if I think my job is hard I should compare it to a coal miner in the 1940's. True, but as Neil Young sang: "Though my problems are meaningless, that don't make them go away."

reply
Xenoamorphous
3 hours ago
[-]
> the US seems to struggle with that specific point as of late, yet it remains true in the rest of the world

Are you sure about that?

reply
embedding-shape
3 hours ago
[-]
No, I'm just pulling anecdotes out of my ass/am hallucinating.

Is there something specific you'd like to point me to, besides just replying with a soundbite?

reply
RHSeeger
2 hours ago
[-]
Admittedly, there's the responses in this thread with people saying "I'm in <some country that isn't the US> and the market here is bad, too".
reply
llm_nerd
21 minutes ago
[-]
How about you tell us where the market is good for devs? It is heinous in Canada and all of Europe that I'm aware of.

Are you in China? India?

reply
IAmBroom
2 hours ago
[-]
Data.
reply
monegator
2 hours ago
[-]
> What exactly is being de-valuated

We are being second-guessed by any sub-organism with a little brain but opposable thumbs, at a rate much greater than before, because now the sub-organism can simply ask the LLM to type their arguments for them. How many times have you received screenshots of an LLM output yes-anding whatever bizarre request you already tried to explain and dismiss as not possible/feasible/unnecessary? The sub-organism has delegated their thoughts to the LLM, and I always find that extremely infuriating, because all I want to do is shake that organism and cry "why don't you get it? Think! THINK! THINK FOR YOURSELF FOR JUST A SECOND"

Also, I enjoy programming. Even typing boring shit like boilerplate, because I keep my brain engaged. As I type, I keep thinking: is this really necessary? And maybe I figure out something leaner. LLMs want to deprive me of the enjoyment of my work (researching, learning) and of my brain. No thanks, no LLM for me. And I don't care whatever garbage it outputs; I'd much prefer the garbage to be your own output, or you are useless.

The only use i have for LLMs and diffusion models is to entertain myself with stupid bullshit i come up with that i would find funny. I massively enjoy projects such as https://dumbassideas.com/

Note: I'm not taking into account the "classic" ML uses; my rant is aimed only at LLMs and the LLM craze. A tool made by grifters, for grifters.

reply
ulfw
4 hours ago
[-]
> What are they talking about? What is this "devaluation"? I'm getting paid more than ever for a job I feel like I almost shouldn't get paid for (I'm just having fun)

You do realise your position of luck is not normal, right? This is not how it is for your average techie in 2025.

reply
lopis
4 hours ago
[-]
Especially for new developers. Entry-level jobs have practically evaporated.
reply
ishouldbework
4 hours ago
[-]
Well, speaking just for central Europe, it is pretty average. Sure, entry-level positions are a different story, but anyone with at least a few years of work experience can find a reasonably paid job fairly quickly.
reply
IAmBroom
2 hours ago
[-]
Others in Europe in this thread contradict your belief.

Actual data is convincing; few are providing it.

reply
embedding-shape
4 hours ago
[-]
I don't know what "position of luck" you're talking about; it took dedicated effort to practice programming and to suffer through a lot of shit until I got my first comfy programming job.

And even if I'm experienced now, I still have peers and acquaintances who are getting into the industry, I'm not sitting in my office with my eyes closed exactly.

reply
Aeolun
4 hours ago
[-]
That’s probably because the definition of ‘average techie’ has been on a rapid downward trajectory for years? You can justify the waste when money is free. Not when you need them to do something.
reply
Glemkloksdjf
4 hours ago
[-]
Every good 'techie' around me has it good.
reply
tester756
2 hours ago
[-]
>the job is easy

Software engineering is easy? You live in a bubble. Try teaching programming to someone new to it and you'll realize how muuuuch effort it requires.

reply
amrocha
4 hours ago
[-]
There’s been over 1 million people laid off in tech in the past 4 years

https://www.trueup.io/layoffs

reply
svantana
2 hours ago
[-]
According to that site, there were more tech layoffs in 2022 than in 2024 or 2025. Doesn't that speak against the "AI is taking tech jobs" hypothesis?
reply
hamdingers
27 minutes ago
[-]
Massive, embarrassingly shortsighted overhiring in 2020 and 2021 seems like the more likely culprit.
reply
embedding-shape
4 hours ago
[-]
Again, sucks to be in the US as a programmer today maybe, but this isn't true elsewhere in the world, and especially not if you already have at least some experience.
reply
lm28469
4 hours ago
[-]
Definitely true in Western Europe, and finding a job is extremely hard for the vast majority of non-expert devs.

Of course, if you're in southeastern Europe or in South Asia, where all the jobs are being offshored, you're having the time of your life.

reply
embedding-shape
4 hours ago
[-]
> Definitely true in western Europe, and finding a job is extremely hard for the vast majority of non expert devs.

I don't know what else to say except that hasn't been my experience personally, nor the experience of my acquaintances who've re-skilled to become programmers these last few years, in Western Europe.

reply
lm28469
4 hours ago
[-]
Anecdotes are cool but we came up with a neat little thing known as statistics.

https://finance.yahoo.com/news/tech-job-postings-fall-across...

> Among the 27 countries analysed, European nations saw the steepest fall in tech job postings between 1 February 2020 and 31 October 2025,

> In absolute terms, the decline exceeded 40% in Switzerland (-46%) and the UK (-41%), with France (-39%) close behind.

> The United States showed a similar trend, with a decline of 35%. Austria (-34%), Sweden (-32%) and Germany (-30%) were also at comparable levels.

reply
amrocha
4 hours ago
[-]
Do you base your entire worldview purely on your own personal experience?
reply
embedding-shape
4 hours ago
[-]
Do you suffer from reading comprehension issues?
reply
amrocha
3 hours ago
[-]
It’s ok to admit that you were wrong. Your experience is good, but the industry is doing very poorly right now. I showed you data to back that up. Someone else posted data about Europe.

Don’t close your eyes and plug your ears and pretend you didn’t hear anything.

reply
MattRix
3 hours ago
[-]
You seem to keep having to add more and more qualifiers to your statements…
reply
nosianu
2 hours ago
[-]
I only see one. "Outside the US" was the starting proposition, then they only added "experienced".
reply
spicyusername
3 hours ago
[-]
This has more to do with monetary policy than AI, though.
reply
empath75
1 hour ago
[-]
I have been laid off 4 times. Tech has a lot of churn, there are a lot of high risk high reward companies.
reply
mschuster91
4 hours ago
[-]
> What are they talking about? What is this "devaluation"?

I'm not paid enough to clean up shit after an AI. Behind an intern or junior? Sure, I enjoy that because I can tell them how shit works, where they went off the rails, and I can be sure they will not repeat that mistake and be better programmers afterwards.

But an AI? Oh good luck with that and good luck dealing with the "updates" that get forced upon you. Fuck all of that, I'm out.

reply
flir
4 hours ago
[-]
> I'm not paid enough to clean up shit after an AI.

I enjoy making things work better. I'm lucky in that, because there's always been more brownfield work than greenfield work. I think of it as being an editor, not an author.

Hacking into vibe code with a machete is kinda fun.

reply
Mountain_Skies
2 hours ago
[-]
Your complete lack of empathy is going to be your undoing. Might want to check in on that.
reply
Trasmatta
1 hour ago
[-]
> What is this "devaluation"?

The part where writing performant, readable, resilient, extensible, and pleasing code used to actually be a valued part of the craft? I feel like I'm being gaslit after decades of being lectured on how to be a better software developer, only to be told that my craft is pointless, the only thing of value is the output, and that I should be happy spending my day babysitting agents and reviewing AI code slop.

reply
queenkjuul
3 hours ago
[-]
You clearly haven't tried looking for a job in the last two years, have you?
reply
GlacierFox
4 hours ago
[-]
Are we living on the same planet?
reply
embedding-shape
4 hours ago
[-]
Considering we surely have wildly different experiences and contexts, you could almost say we live on the same planet, but it looks very different to each one of us :)
reply
belter
4 hours ago
[-]
No... :-)
reply
zombot
4 hours ago
[-]
I'd love to live on the same planet you do.
reply
embedding-shape
4 hours ago
[-]
Gained life experience is always possible, regardless of your age :) Give other professions a try, and see the difference for yourself.
reply
gradus_ad
1 minute ago
[-]
Maybe because I came into software not from an interest in software itself but from wanting to build things, I can't relate to the anti-LLM attitude. The danger in becoming a "crafter" rather than a "builder" is you lose the forest for the trees. You become more interested in the craft for the craft's sake than for its ability to get you from point A to point B in the best way.

Not that there's anything wrong with crafting, but for those of us who just care about building things, LLMs are an absolute asset.

reply
TrackerFF
2 hours ago
[-]
I get that some people want to be intellectually "pure". Artisans crafting high-quality software, made with love, and all that stuff.

But one emerging reality for everyone should be that businesses are swallowing the AI-hype raw. You really need a competent and understanding boss to not be labeled a luddite, because let's be real - LLMs have made everyone more "productive" on paper. Non-coders are churning out small apps in record pace, juniors are looking like savants with the amount of code and tasks they finish, where probably 90% of the code is done by Claude or whatever.

If your org is blindly data/metric driven, it is probably just a matter of time until managers start asking why everyone else is producing so much while you're slow?

reply
Aurornis
1 hour ago
[-]
> Non-coders are churning out small apps in record pace, juniors are looking like savants with the amount of code and tasks they finish, where probably 90% of the code is done by Claude or whatever.

Honestly I think you’re swallowing some of the hype here.

I think the biggest advantages of LLMs go to the experienced coders who know how to leverage them in their workflows. That may not even include having the LLM write the code directly.

The non-coders-producing-apps meme is all over social media, but the real-world results aren't there. All over Twitter there were “build in public” indie non-tech developers using LLMs to write their apps, and the hype didn't match reality. Some people could get minimal apps out the door that kind of talked to a back end, but even those people ran into issues with not breaking everything on update or with managing the software lifecycle.

The top complaint in all of the social circles I have about LLMs is with juniors submitting LLM junk PRs and then blaming the LLM. It’s just not true that juniors are expertly solving tasks with LLMs faster than seniors.

I think LLMs are helpful, and anyone senior who isn't learning how to use them to their advantage (which doesn't mean telling the LLM what to write and hoping for the best) is missing out. I think people swallowing the hype about non-tech people and juniors doing senior work are getting misled about the actual ways to use these tools effectively.

reply
jvanderbot
49 minutes ago
[-]
It's not just "juniors". It's people who should know better turning out LLM junk outside their actual experience areas because "They are experienced enough to use LLMs".

There are just some things that need lots of extra scrutiny in a system, and the experienced ones know where that is. An LLM rarely seems to, especially for systems of anywhere near real world production size.

reply
intended
6 minutes ago
[-]
I didn't read the parent comment as celebrating this state. More like they were decrying it, and the blindness of people who just run on metrics.
reply
davidmurdoch
1 hour ago
[-]
This just happened to me this week.

I work on the platform everyone builds on top of. A change here can subtly break any feature, no matter how distant.

AI just can't cope with this yet. So my team has been told that we are too slow.

Meanwhile, earlier this week we halted a rollout because of a bug introduced by AI: it worked around a privacy feature by just allowlisting the behavior it wanted, instead of changing the code to address the policy. It wasn't caught in review because the file that was changed didn't require my team's review (because we ship more slowly, they recently removed us as code owners for many files).

reply
grayhatter
11 minutes ago
[-]
> It wasn't caught in review because the file that was changed didn't require my team's review (because we ship more slowly, they recently removed us as code owners for many files).

I've lost your fight, but I've won mine before; you can sell this as risk reduction to your boss. I've never seen eng win this argument on quality grounds. Quality is rarely something that can be understood by company leadership. But having a risk-reduction team that moves a bit slower and protects the company from extreme exposures like this is much harder to cut from the process. "Imagine the lawsuits missing something like this would cause." and "We don't move slower, we do more than the other teams; the code is more visible, but the elimination of mistakes that would be very expensive legally and reputationally is what we're the best at."

reply
BarryMilo
1 hour ago
[-]
As was foretold from the beginning, AI use is breaking security wantonly.
reply
rho4
51 minutes ago
[-]
Ouch, so painful to read.
reply
syllogism
1 hour ago
[-]
I think LLMs are net helpful if used well, but there's also a big problem with them in workplaces that needs to be called out.

It's really easy to use LLMs to shift work onto other people. If all your coworkers use LLMs and you don't you're gonna get eaten alive. LLMs are unreasonably effective at generating large volumes of stuff that resembles diligent work on the surface.

The other thing is, tools change trade-offs. If you're in a team that's decided to lean into static analysis, and you don't use type checking in your editor, you're getting all the costs and less of the benefits. Or if you're in a team that's decided to go dynamic, writing good types for just your module is mostly a waste of time.

LLMs are like this too. If you're using a very different workflow from everyone else on your team, you're going to end up constantly arguing for different trade-offs, and ultimately you're going to cause a bunch of pointless friction. If you don't want to work the same way as the rest of the team just join a different team, it's really better for everyone.

reply
acedTrex
27 minutes ago
[-]
> It's really easy to use LLMs to shift work onto other people.

This is so incredibly true.

reply
arscan
1 hour ago
[-]
> It's really easy to use LLMs to shift work onto other people.

This is my biggest gripe with LLM use in practice.

reply
fileeditview
1 hour ago
[-]
The era of software mass production has begun. With many "devs" just being workers in a production line, pushing buttons, repeating the same task over and over.

The resulting products, however, do not compare in quality to other industries' mass-production lines. I wonder how long it will take until this all comes crashing down. Software mostly already is not a high-quality product... with Claude & co it just gets worse.

edit: sentence fixed.

reply
afro88
1 hour ago
[-]
I think you'll be waiting a while for the "crashing down". I was a kid when manufacturing went off shore and mass production went into overdrive. I remember my parents complaining about how low quality a lot of mass produced things were. Yet for decades most of what we buy is mass produced, comparatively low quality goods. We got used to it, the benefits outweighed the negatives. What we thought mattered didn't in the face of a lot of previously unaffordable goods now broadly available and affordable.

You can still buy high-quality goods made with care when it matters to you, but that's the exception. It will be the same with software. A lot of what we use will be mass-produced with AI, and even produced in realtime on the fly (in 5 years maybe?). There will be some things where we'll pay a premium for software crafted with care, but for most it won't matter because of the benefits of rapidly produced software.

We've got a glimpse of this with things like Claude Artifacts. I now have a piece of software quite unique to my needs that simply wouldn't have existed otherwise. I don't care that it's one big js file. It works and it's what I need and I got it pretty much for free. The capability of things like Artifacts will continue to grow and we'll care less and less that it wasn't human produced with care.

reply
fileeditview
44 minutes ago
[-]
While a general "crashing down" probably will not happen, I could imagine some differences from other mass-produced goods.

Most of our private data lives in clouds now and there are already regular security nightmares of stolen passwords, photos etc. I fear that these incidents will accumulate with more and more AI generated code that is most likely not reviewed or reviewed by another AI.

Also regardless of AI I am more and more skipping cheap products in general and instead buying higher quality things. This way I buy less but what I buy doesn't (hopefully) break after a few years (or months) of use.

I see the same for software. Already before AI we were flooded with trash. I bet we could all delete at least half of the apps on our phones and nothing would be worse than before.

I am not convinced by the rosy future of instant AI-generated software but future will reveal what is to come.

reply
kiba
1 hour ago
[-]
Poor quality is not synonymous with mass production. It's just cheap crap made with little care.
reply
lxgr
1 hour ago
[-]
> The era of software mass production has begun.

We've been in that era for at least two decades now. We just only now invented the steam engine.

> I wonder how long it takes until this comes all crashing down.

At least one such artifact of craft and beauty already literally crashed two airplanes. Bad engineering is possible with and without LLMs.

reply
knollimar
1 hour ago
[-]
There's a huge difference between possible and likely.

Maybe I'm pessimistic, but I at least feel like there's a world of difference between a practice that encourages bugs and one that lets them through only when there is negligence. The accountability problem needs to be addressed before we say it's like self-driving cars outperforming humans. On an errors-per-line basis, I don't think LLMs are on par with humans yet.

reply
lxgr
1 hour ago
[-]
Knowing your system components’ various error rates and compensating for them has always been the job. This includes both the software itself and the engineers working on it.

The only difference is that there is now a new high-throughput, high-error (at least for now) component editing the software.

reply
vrighter
51 minutes ago
[-]
What is (and I'm being generous with the base here) 0.95^10? A 10-step process with a 95% success rate at each step.
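
For concreteness, a quick sketch of the compounding (the 95% per-step success rate is the illustrative assumption above, not a measured figure):

    # Probability that a 10-step process completes with no failed step,
    # assuming independent steps with the same per-step success rate.
    p_step = 0.95
    n_steps = 10

    p_all = p_step ** n_steps
    print(f"{p_all:.3f}")  # 0.599 -- roughly two failed runs in every five

So even a generous 95% per step leaves an end-to-end success rate of only about 60%.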
reply
pacifika
1 hour ago
[-]
Yeah it’s interesting to see if blaming LLMs becomes as acceptable as “caused by a technical fault” to deflect responsibility from what is a programmer’s output.

Perhaps that’s what will lead to a decline in accountability and quality.

reply
goldeneas
1 hour ago
[-]
> Bad engineering is possible with and without LLMs

That's obvious. It's a matter of which makes it more likely

reply
bluefirebrand
13 minutes ago
[-]
> Bad engineering is possible with and without LLMs.

Is Good Engineering possible with LLMs? I remain skeptical.

reply
carlosjobim
58 minutes ago
[-]
Why didn't programmers think of stepping down from their ivory towers and start making small apps which solve small problems? Ones that people and businesses are very happy to pay for?

But no! Programmers seem to only like working on giant scale projects, which only are of interest to huge enterprises, governments, or the open source quagmire of virtualization within virtualization within virtualization.

There's exactly one good invoicing app I've found that works for freelancers and small businesses, while the number of potential customers is in the tens of millions. Why aren't there at least 10 good competitors?

My impression is that programmers consider it below their dignity to work on simple software which solves real problems and is great for its niche. Instead it has to be big and complicated, enterprise-scale. And if they can't get a job doing that, they will pretend to have a job doing that by spending their time making open source software for enterprise-scale problems.

Instead of earning a very good living by making boutique software for paying users.

reply
fileeditview
40 minutes ago
[-]
I don't think programmers are the issue here. What you describe sounds to me more like typical product management in a company: stuff features into the thing until it bursts with bugs and is barely maintainable.

I would love to do something like what you describe. Build a simple but solid and very specialized solution. However I am not sure there is demand or if I have the right ideas for what to do.

You mention invoicing and I think: there must be hundreds of apps for what you describe but maybe I am wrong. What is the one good app you mention? I am curious now :)

reply
cjs_ac
19 minutes ago
[-]
Solving these problems requires going outside and talking to people to find out what their problems are. Most programmers aren't willing to do that.
reply
atleastoptimal
1 hour ago
[-]
Many people actually are becoming more productive. I know you're using quotes around productive to insulate yourself from the indignity of admitting that AI actually is useful in specific domains.
reply
bluefirebrand
12 minutes ago
[-]
If AI is making you more productive, then I doubt you were very productive pre-AI
reply
cyral
3 minutes ago
[-]
The denial/cope here is insane
reply
SpicyLemonZest
45 minutes ago
[-]
> If your org is blindly data/metric driven, it is probably just a mater of time until managers start asking why everyone else is producing so much, while you're slow?

It’s a reasonable question, and my response is that I’ve encountered multiple specific examples now of a project being delayed a week because some junior tried to “save” a day by having AI write bad code.

Good managers generally understand the concept of a misleading productivity metric that fails to reflect real value. There’s a reason, after all, why most of us don’t get promoted based on lines of code delivered. I understand why people who don’t trust their managers to get this would round it off to artisanship for its own sake.

reply
AndrewKemendo
1 hour ago
[-]
> If your org is blindly data/metric driven

Are there for profit companies (not non profits, research institutes etc…) that are not metric driven?

reply
intothemild
1 hour ago
[-]
Most early stage startups I've been in weren't metric driven. It's impossible when everyone is just working as hard as they can to get it built, to suddenly slow down and start measuring everyone's output.

It's not until later, when it's gotten to a larger size, that you have the resources to be metric-driven.

reply
layer8
55 minutes ago
[-]
“Blindly” is the operative word here.
reply
zwnow
1 hour ago
[-]
> You really need a competent and understanding boss to not be labeled a luddite, because let's be real - LLMs have made everyone more "productive" on paper.

I am actually less productive when using LLMs, because now I have to read another entity's code and be able to judge whether it fits my current business problem or not. If it doesn't, yay, refactoring prompts instead of tackling the actual problem. Also, I can write code for free; LLM coding assistants aren't free. I can fit business problems and edge cases into my brain given some time; an LLM is unaware of edge cases, legal requirements, decoupled dependencies, potential refactors or the occasional call from the boss asking for something to be sneaked into the code right now. If my job forced me to use these tools, congrats, I'll update my address to some hut in a forest, eating cold canned ravioli for the rest of my life, because I sure don't want to work in a world where I am forced to use dystopian big tech machines I can't look into.

reply
Aurornis
1 hour ago
[-]
> I am actually less productive when using LLMs, because now I have to read another entity's code and be able to judge whether it fits my current business problem or not.

You don’t have to let the LLM write code for you. They’re very useful as a smart search engine for your code base, a smart refactoring tool, a suggestion generator, and many other ways.

I rarely have LLMs write code for me from scratch that I have to review, but I do give them specific instructions to do what I want to the codebase. They can do it much faster than I can search around the codebase and type out myself.

There are so many ways to make LLMs useful without having them do all the work while you sit back and judge. I think some people are determined to get no value out of the LLM because they feel compelled to be anti-hype, so they’re missing out on all the different little ways they can be used to help. Even just using it as a smarter search engine (in the modes where they can search and find the right sections of right articles or even GitHub issues for you) has been very helpful. But you have to actually learn how to use them.

> If my job forced me to use these tools, congrats, I'll update my address to some hut in a forrest eating cold canned ravioli for the rest of my life because I for sure dont wanna work in a world where I am forced to use dystopian big tech machines I cant look into.

Okay, good luck with your hut in the forest. The rest of us will move on using these tools how we see fit, which for many of us doesn’t actually include this idea where the LLM is the author of the code and you just ask nicely and reject edits until it produces the exact code you want. The tools are useful in many ways and you don’t have to stop writing your own code. In fact, anyone who believes they can have the LLM do all the coding is in for a bad surprise when they realize that specific hype is a lie.

reply
bgwalter
1 hour ago
[-]
Is that why open source progress has generally slowed down since 2023? We keep hearing these promises, and reality shows the opposite.
reply
Aurornis
1 hour ago
[-]
> Is that why open source progress has generally slowed down since 2023?

Citation needed for a claim of that magnitude.

reply
zwnow
1 hour ago
[-]
> But you have to actually learn how to use them.

This probably is the issue for me, I am simply not willing to do so. To me the whole AI thing is extremely dystopian so even on a professional level I feel repulsed by it.

We had an AWS and a Cloudflare outage recently, which has shown that maybe it isn't a great idea to rely on a few companies for a single _thing_. Integrating LLMs and using all these tools is just another bridge people depend on at some point.

I want to write software that works, preferably even offline. I want tools that do not spy on me (referring to that new Google editor, forgot the name). Call me once these tools work offline on my 8GB RAM laptop with a crusty CPU and I might put in the effort to learn them.

reply
lxgr
59 minutes ago
[-]
> We had an AWS and a Cloudflare outage recently, which has shown that maybe it isn't a great idea to rely on a few companies for a single _thing_.

I share that concern about massive, unforced centralization. If there were any evidence for the hypothesis that LLM inference would always remain viable in datacenters only, I'd be extremely concerned about their use too.

But from all I've seen, it seems overwhelmingly likely that we'll have very powerful ones in our phones in at most a few years, and definitely in midrange laptops and above.

reply
zwnow
29 minutes ago
[-]
I'd be all for it if it's truly disconnected from big tech entities.
reply
Aurornis
1 hour ago
[-]
> This probably is the issue for me, I am simply not willing to do so.

Thanks for being honest at least. So many HN arguments start as a desire to hate something and then try to bridge that into something that feels like a takedown of the merits of that thing. I think a lot of the HN LLM hate comes from people who simply want to hate LLMs.

> We had an AWS and a Cloudflare outage recently, which has shown that maybe it isn't a great idea to rely on a few companies for a single _thing_. Integrating LLMs and using all these tools is just another bridge people depend on at some point.

For an experienced dev using LLMs as another tool, an LLM outage isn’t a problem. You just continue coding.

It’s on the level of Google going down so you have to use another search engine or try to remember the URL for something yourself.

The main LLM players are also easy to switch between. I jump between Anthropic, Google, and OpenAI almost month to month to try things out. I could have subscriptions to all 3 at the same time and it would still be cheap.

I think this point is overblown. It's not a true team dependency like when GitHub stopped working a few days back.

reply
lxgr
2 hours ago
[-]
> I find it particularly disillusioning to realize how deep the LLM brainworm is able to eat itself even into progressive hacker circles.

Anything worth reading beyond this transparent and hopefully unsuccessful appeal to tribalism?

Hackers have always tried out new technologies to see how they work – or break – so why would LLMs be any different?

> the devaluation of our craft, in a way and rate we never anticipated possible. A fate that designers, writers, translators, tailors or book-binders lived through before us

What is it with this perceived right to fulfilling, but also highly paid, employment in software engineering?

Nobody is stopping anyone from doing things by hand that machines can do at 10 times the quality and 100 times the speed.

Some people will even pay for it, but not many. Much will be relegated to unpaid pastime activities, and the associated craftspeople will move on to other activities to pay the bills (unless we achieve post-scarcity first). That's just human progress in a nutshell.

If the underlying problem is that many societies define a person's worth via their employability, that seems like a problem best fixed by restructuring said societies, not by artificially blocking technological progress. "progressive hackers"...

reply
ceejayoz
1 hour ago
[-]
> Hackers have always tried out new technologies to see how they work – or break – so why would LLMs be any different?

Who says we haven't tried it out?

reply
hamdingers
18 minutes ago
[-]
> I personally don’t touch LLMs with a stick. I don’t let them near my brain. Many of my friends share that sentiment.

FTA.

I know tons of people where "tried it out" means they've seen Google's abysmal search summary feature, or merely seen the memes and read news articles about how it's wrong sometimes, and haven't explored any further.

reply
bluefirebrand
7 minutes ago
[-]
Personally I'm watching people I used to respect start to rely on AI more and more and their skills and knowledge are declining rapidly while their reliance is growing, so I'm really not interested in following that path

They seem just as enthusiastic as many of the pro AI voices here on HN, while the quality of their work declines. It makes me extremely skeptical of anyone who is enthusiastic about AI. It seems to me like it's a delusion machine

reply
lxgr
1 hour ago
[-]
Seems like various hackers came to various different conclusions from trying them out, then. Why feign surprise about this?
reply
ceejayoz
1 hour ago
[-]
Why not?

I was surprised how hard many here fell for the NFT thing, too.

reply
lxgr
1 hour ago
[-]
Please, not the old "AI is the new crypto" trope.

Various people have been wrong on various predictions in the past, and it seems to me that any implied strong overlap is anecdotal at best and wishful (why?) thinking at worst.

The only really embarrassing behavior is never updating your priors when your predictions are wrong. Also, if you're always right about all your prognoses, you should probably also not be in the HN comments but on a prediction market, on-chain or traditional :)

reply
AlexandrB
28 minutes ago
[-]
I'm sorry, but the idiocy that was crypto-hype can't be dismissed this easily. It's hard to make a prediction on AI because things are moving so fast and the technology is actually useful, so I wouldn't fault anyone for being wrong in retrospect. But when it comes to NFTs: if you bought into that stuff you are either a sucker or a scammer and in both cases your future opinions can be safely discarded.
reply
lxgr
12 minutes ago
[-]
> the idiocy that was crypto-hype can't be dismissed this easily.

Maybe so, but would it be possible to not dismiss it elsewhere? I just don't see the causal relation between AI and crypto, other than that both might be completely overhyped, world-changing, or boringly correctly estimated in their respective impact.

reply
spot5010
10 minutes ago
[-]
"I personally don’t touch LLMs with a stick. I don’t let them near my brain. Many of my friends share that sentiment."

Any software engineer who shares this sentiment is doing their career a disservice. LLMs have their pitfalls, and I have been skeptical of their capabilities, but nevertheless I have tried them out earnestly. The progress of AI coding assistants over the past year has been remarkable, and now they are a routine part of my workflow. It does take some getting used to, and effectively using an AI coding assistant is a skill in and of itself that is worth mastering.

reply
jtrn
3 minutes ago
[-]
I feel AI is now good enough to follow the same pattern as internet usage. The quality ranges from useless to awesome based on how you use it. Blanket statements that “it is terrible and useless” reveal more about the person than the tech at this point.
reply
wowamit
2 minutes ago
[-]
AI is not one solution to all the problems in the world. But neither is it worthless. There's a proper balance to be had in knowing how useful AI is to an individual.

Sure, it can be overdone. But at the same time, it shouldn't be undersold.

reply
shlip
5 hours ago
[-]
> AI systems exist to reinforce and strengthen existing structures of power and violence.

Exactly. You can see that with the proliferation of chickenized reverse centaurs[1] in all kinds of jobs. Getting rid of the free-willed human in the loop is the aim now that bosses/stakeholders have seen the light.

[1] https://pluralistic.net/2022/04/17/revenge-of-the-chickenize...

reply
Glemkloksdjf
4 hours ago
[-]
If you are a software engineer, you can leverage AI a lot better to write code than anyone else.

Good code is still complicated to write.

Which means: 1. If software development is really solved, everyone else also gets a huge problem (CEOs, CTOs, accountants, designers, etc.), so we are at the back of the AI doomsday line.

And 2. it allows YOU to leverage AI a lot better, which can enable you to create your own product.

In my startup, we leverage AI, and we are not worried that another company just does the same thing, because even if they do, we know how to write good code and architecture and we are also using AI. So we will always be ahead.

reply
countWSS
4 hours ago
[-]
Sounds like the Manna control system: https://marshallbrain.com/manna
reply
Aeolun
4 hours ago
[-]
How is that different from making manual computation obsolete with the help of Excel?
reply
flir
5 hours ago
[-]
Now apply that thinking to computers. Or levers.

More than once I've seen the argument that computers let us prop up and even scale governmental systems that would have long since collapsed under their own weight if they'd remained manual. I'm not sure I buy it, but computation undoubtedly shapes society.

The author does seem quite keen on computers, but they've been "getting rid of the free-willed human in the loop" for decades. I think there might be some unexamined bias here.

I'm not even saying the core argument's wrong, exactly - clearly, tools build systems ("...and systems kill" - Crass). I guess I'm saying tools are value neutral. Guns don't kill people. So this argument against LLMs is an argument against all tools, unless you can explain how LLMs are a unique category of tool?

(Aside: calling out the lever sounds silly, but I think it's actually a great example. You can't do monumental architecture without levers, and the point in history where we start doing that is also the point where serious surplus extraction kicks in. I don't think that's coincidence).

reply
prmph
4 hours ago
[-]
Tools are not value neutral in any way.

In my third world country, motorbikes, scooters, etc. have exploded in popularity and use in the past decade. Many people riding these things have made the roads much more dangerous for all, but particularly for themselves. They keep dying by the hundreds per month, not just because they choose to ride them at all, but because of how they ride them: on busy high-speed highways, weaving between lanes all the time, swerving in front of speeding cars, with barely any protective equipment whatsoever. A car crash is frequently very survivable; a motorcycle crash, not so much. Even if you survive the initial collision, the probability of another vehicle running you over is very high on a busy highway.

One would think, given the clear evidence of how dangerous these things are: why do people (1) ride them at all on the highway, and (2) in such a dangerous manner? One might excuse (1) by recognizing that many are poor and can't buy a car, and the motorbikes represent economic possibility: for use in a courier business, of being able to work much further from home, etc.

But here is the thing about (2): a motorbike wants to be ridden that way. No matter how well the rider recognizes the danger, there is only so much time that can pass before the sheer expediency of riding that way overrides any sense of due caution. Where it would be safer to stop or keep to a fixed lane without any sudden movements, the rider thinks of the inconvenience of stopping, does a quick mental comparison to the (in their mind) minuscule additional risk, and carries on. Stopping or keeping to a proper lane in a car requires far less discipline than doing that on a motorbike.

So this is what people mean when they say tech is not value neutral. The tech can theoretically be used in many ways. But some forms of use are so aligned with the form of the tech that in practice it shapes behavior.

reply
flir
4 hours ago
[-]
> A motorbike wants to be ridden that way

That's a lovely example. But is the dangerous thing the bike, or the infrastructure, or the system that means you're late for work?

I completely get what you're saying. I was thinking of tools in the narrowest possible way - of the tool in isolation (I could use this gun as a doorstop). You're thinking of the tool's interface with its environment (in the real world nobody uses guns as doorstops). I can't deny that's the more useful way to think about tools ("computation undoubtedly shapes society").

reply
idle_zealot
4 hours ago
[-]
> The author does seem quite keen on computers, but they've been "getting rid of the free-willed human in the loop" for decades. I think there might be some unexamined bias here.

Certainly it's biased. I'm not the author, but to me there's a huge difference between computer/software as a tool, designed and planned, with known deterministic behavior/functionality, then put in the hands of humans, vs automating agency. The former I see as a pretty straightforward expansion of humanity's long-standing relationship with tools, from simple sticks to hand axes to chainsaws. The sort of automation AI-hype seems focused on doesn't have a great parallel in history. We're talking about building a statistical system to replace the human wielding the tool, mostly so that companies don't have to worry about hiring employees. Even if the machine does a terrible job and most of humanity, former workers and current users, all suffer, the bet is that it will be worth the cost savings.

ML is very cool technology, and clearly one of the major frontiers of human progress. At this stage though, I wish the effort on the packaging side was being spent on wrapping the technology in the form of reliable capabilities for humans to call on. Stuff like OCR at the OS level or "separate tracks" buttons in audio editors. The market has decided instead that the majority of our collective effort should go towards automated liability-sinks and replacing jobs with automation that doesn't work reliably.

And the end state doesn't even make sense. If all this capital investment does achieve breakthroughs and create true AGI, do investors really think they'll see returns? They'll have destroyed the entire concept of an economy. The only way to leverage power at that point would be to try to exercise control over a robot army or something similarly sci-fi and ridiculous.

reply
thwarted
1 hour ago
[-]
"Automating agency" it's such a good way to describe what's happening. In the context of your last paragraph, if they succeed in creating AGI, they won't be able to exercise control over a robot army, because the robot army will have as much agency as humans do. So they will have created the very situation they currently find themselves in. Sans an economy.
reply
amrocha
4 hours ago
[-]
It’s a good thing that there’s centuries of philosophy on that subject and the general consensus is that no, tools are not “neutral” and do shape the systems they interact with, sometimes against the will of those wielding these tools.

See the nuclear bomb for an example.

reply
flir
4 hours ago
[-]
I'm actually thinking of Marshall McLuhan. Maybe you're right, and tools aren't neutral. Does this mean that computation necessitates inequality? That's an uncomfortable conclusion for people who identify as hackers.
reply
fennecfoxy
4 hours ago
[-]
Lmao Cory Doctorow. Desperately trying to coin another catchphrase again.
reply
lynx97
4 hours ago
[-]
I am surprised (and also kind of not) to see this kind of tech hate on HN of all places.

Would you prefer we heat our homes by burning wood, carry water from the nearby spring, and ride horses to visit relatives?

Progress is progress, and has always changed things. It's funny that apparently, "progressive" left-leaning people are actually so conservative at the core.

So far, in my book, the advancements of the last 100 or more years have mostly brought us things I wouldn't want to miss these days. But maybe some people would be happier to go back to the dark ages...

reply
seu
4 hours ago
[-]
> Progress is progress, and has always changed things. It's funny that apparently, "progressive" left-leaning people are actually so conservative at the core.

I am surprised (and also kind of not) to see this lack of critical reflection on HN of all places.

Saying "progress is progress" serves nobody, except those who drive "progress" in directions that benefits them. All you do by saying "has always changed things" is taking "change" at face value, assuming it's something completely out of your control, and to be accepted without any questioning it's source, it's ways or its effects.

> So far, in my book, the advancements of the last 100 or more years have mostly brought us things I wouldn't want to miss these days. But maybe some people would be happier to go back to the dark ages...

Amazing depiction of extremes as the only possible outcomes. Either take everything that is thrown at us, or go back into a supposed "dark age" (which, BTW, is nowadays understood to not have been that "dark" at all). This, again, doesn't help have a proper discussion about the effects of technology and how it comes to be the way it is.

reply
Glemkloksdjf
4 hours ago
[-]
The dark age was dark. No human rights, no women's rights (!), hunger, thirst, no progress at all, hard lives.

So are you able, realistically, to stop progress around a whole planet? Tbh, getting alignment across the planet to slow down or stop AI would be the equivalent of stopping capitalism and actually building a holistic planet for us.

I think AI will force the hand of capitalism, but I don't think we will be able to create a Star Trek universe without getting forced.

reply
trashb
1 hour ago
[-]
> The dark age was dark. No human rights, no women's rights (!), hunger, thirst, no progress at all, hard lives.

There was progress in the Middle Ages, hence the difference between the early and late Middle Ages. Most information was passed by word of mouth instead of written down.

"The term employs traditional light-versus-darkness imagery to contrast the era's supposed darkness (ignorance and error) with earlier and later periods of light (knowledge and understanding)."

"Others, however, have used the term to denote the relative scarcity of written records regarding at least the early part of the Middle Ages"

https://en.wikipedia.org/wiki/Dark_Ages_(historiography)

reply
lm28469
4 hours ago
[-]
> Would you prefer we heat our homes by burning wood, carry water from the nearby spring, and ride horses to visit relatives?

I'm more surprised that seemingly educated people have such simplistic views as "technology = progress, progress = good, hence technology = good". Vaccines and running water are tech; megacorp-owned "AI" being weaponised by surveillance-obsessed governments is also tech.

If you don't push back on "tech" you're just blindingly accepting whatever someone else decided for you. Keep in mind the benefits of tech since the 80s have mostly been pocketed by the top 10%, the pleb still work as much, retire as old, &c. despite what politicians and technophiles have been saying

reply
andrepd
4 hours ago
[-]
"You don't like $instance_of_X? You must want to get rid of all $X" has got to be one of the most intellectually lazy things you could say.

You don't like leaded gasoline? You must want us to walk everywhere. Come on...

reply
lynx97
4 hours ago
[-]
A tool is a tool. These AI critics sound to me like people who have hit their finger with a hammer, and now advocate against using them altogether. Yes, tech has always had two sides. Our "job" as humans is to pick the good parts, and avoid the bad. Nothing new, nothing exceptional.
reply
lm28469
4 hours ago
[-]
> A tool is a tool. These AI critics sound to me like people who have hit their finger with a hammer, and now advocate against using them altogether.

Speaking of wonky analogies, have you considered that other people have access to these hammers and are aiming for your head? And that some people might not want to be hit on the head by a hammer?

reply
andrepd
3 hours ago
[-]
More lazy analogies... Yes a hammer is a tool, so is a machine gun, a nuke, or the guy with his killdozer. So what are you gonna do? Nothing to see here, discussion closed.

This is not an interesting conversation.

reply
fancyfredbot
10 minutes ago
[-]
I feel like in a sci-fi world with robots, teleportation and holodecks these people would decide to stay at home and hand wash the dishes.

If an amazing world changing technology like LLMs shows up on your doorstep and your response is to ignore it and write blog posts about how you don't care about it then you aren't curious and you aren't really a hacker.

reply
IshKebab
6 minutes ago
[-]
I don't touch dishwashers with a stick. No matter how well they work. I find it particularly disillusioning to realize how deep the dishwasher brainworm is able to eat itself even into progressive cleaning circles.

Edit: Ha I see you edited "empty the dishwasher" to "hand wash the dishes". My thoughts exactly.

There's no hope for these people.

reply
rho4
4 hours ago
[-]
And then there is the moderate position: Don't be the person refusing to use a calculator / PC / mobile phone / AI. Regularly give the new tool a chance and check if improvements are useful for specific tasks. And carry on with your life.
reply
rsynnott
2 hours ago
[-]
Don't be the person refusing the 4GL/Segway/3D TV/NFT/Metaverse. Regularly give the new tool a chance and check if improvements are useful for specific tasks.

Like, I mean, at a certain point it runs out of chances. If someone can show me compelling quantitative evidence that these things are broadly useful I may reconsider, but until then I see no particular reason to do my own sampling. If and when they are useful, there will be _evidence_ of that.

(In fairness Segways seem to have a weird afterlife in certain cities helping to make tourists more annoying; there are sometimes niche uses for even the most pointless tech fads.)

reply
aurareturn
1 hour ago
[-]

  Like, I mean, at a certain point it runs out of chances. If someone can show me compelling quantitive evidence that these things are broadly useful I may reconsider, but until then I see no particular reason to do my own sampling. If and when they are useful, there will be _evidence_ of that.
My relative came to me to make a small business website for her. She knew I was a "coder". She gave me a logo and what her small business does.

I fed all of it into Vercel v0 and out came a professional looking website that is based on the logo design and the business segment. It was mobile friendly too. I took the website and fed it to ChatGPT and asked it to improve the marketing copy. I fed the suggestions back to v0 to make changes.

My relative was extremely happy with the result.

It took me about 10 minutes to do all of this.

In the past, it probably would have taken me 2 weeks. One week to design, write copy, get feedback. Another week to code it, make it mobile friendly, publish it. Honestly, there is no way I could have done a better job given the time constraint.

I even showed my non-tech relative how to use v0. Since all the changes requested of v0 were in English, she had no trouble learning how to use it in one minute.

reply
rsynnott
1 hour ago
[-]
Okay, I mean if that’s the sort of thing you regularly have to do, cool, it’s useful for that, maybe, I suppose? To be clear I’m not saying LLMs are totally useless.
reply
aurareturn
33 minutes ago
[-]
I don't have to do this regularly. You asked for a qualitative example. I just gave you one.
reply
cons0le
2 hours ago
[-]
I detest LLMs, but I want to point out that Segway tech became the basis for EUCs, which are based: https://youtu.be/Ze6HRKt3bCA?t=1117

These things are wicked, and unlike some new garbage JavaScript framework, they're revolutionary technology that regular people can actually use and benefit from. The mobility they provide is insane.

https://old.reddit.com/r/ElectricUnicycle/comments/1ddd9c1/i...

reply
atonse
25 minutes ago
[-]
While that video looks cool from a "Red Bull Video of crazy people doing crazy things" type angle, that looks extremely dangerous for day to day use. You're one pothole or bad road debris away from a year in the hospital at best, or death at worst.

There is something to be said for the protective shell of a vehicle.

reply
catapart
1 hour ago
[-]
lol! I thought this was going to link to some kind of innovative mobility scooter or something. I was still going to say "oh, good; when someone uses the good parts of AI to build something different which is actually useful, I'll be all ears!", because that's all you would really have been advocating for if that was your example.

But - even funnier - the thing is an urbanist tech-bro toy? My days of diminishing the segway's value are certainly coming to a middle.

reply
Spivak
2 hours ago
[-]
I mean, sure, but none of these even claimed to help you do things you were already doing. If your job is writing code, none of these help you do that.

That being said, the metaverse happened, but it just wasn't the metaverse those weird cringy tech libertarians wanted it to be. Online spaces where people hang out are bigger than ever. Segways also happened; they just changed form into electric scooters.

reply
catapart
1 hour ago
[-]
Being honest, I don't know what a 4GL is. But the rest of them absolutely DID claim to help me do things I was already doing. And, actually, NFTs and the Metaverse even specifically claimed to be able to help with coding in various different flavors. It was mostly superficial bullshit, but... that's kind of the whole tech for those two things.

In any case, Segways promised to be a revolution in how people travel - something I was already doing and something that the marketing was predicated on. 3D TVs - a "better" way to watch TV, which I had already been doing. NFTs - (among other things) a financially superior way to bank, which I had already been doing. Metaverse - a more meaningful way to interact with my team on the internet, which I had already been doing.

reply
rsynnott
1 hour ago
[-]
A 4GL is a "fourth generation language"; they were going to reduce the need for icky programmers back in the 70s. SQL is the only real survivor, assuming you're willing to accept that it counts at all. "This will make programmers obsolete" is kind of a recurrent form of magic tech; see 4GLs, 5GLs, the likes of Microsoft Access, the early noughties craze for drag-and-drop programming, 'no-code', and so forth. Even _COBOL_ was kind of originally marketed this way.
reply
empath75
1 hour ago
[-]
The biggest change in my career was when I got promoted to be a Linux sysadmin at a large tech company that was moving to AWS. It was my first sysadmin job and I barely knew what I was doing, but I knew some bash and Python. I had a chance to learn how to manage stuff in data centers by logging into servers with ssh and running Perl scripts, or I could learn CloudFormation because that was what management wanted. Everybody else on my team thought AWS was a fad and refused to touch it unless absolutely forced to. I wrote a ton of terrible CloudFormation and Chef cookbooks and got promoted twice, and my salary went from $50,000 a year to $150,000 a year in 3 years after I took a job elsewhere. AFAIK, most of the people on that team got laid off when the whole team was eliminated a few years after I left.
reply
skydhash
4 hours ago
[-]
If a calculator gives me 5 when I do 2+2, I throw it away.

If a PC crashes when I use more than 20% of its soldered memory, I throw it away.

If a mobile phone refuses to connect to a cellular tower, I get another one.

What I want from my tools is reliability. Which is a spectrum, but LLMs are very much on the lower end.

reply
tokioyoyo
4 hours ago
[-]
You can have this position, but the reality is that the industry is accepting it and moving forward. Whether you’ll embrace some of it and utilize it to improve your workflow, is up to you. But over-exaggerating the problem to this point is kinda funny.
reply
crazygringo
2 hours ago
[-]
Honestly, LLMs are about as reliable as the rest of my tools are.

Just yesterday, AirDrop wouldn't work until I restarted my Mac. Google Drive wouldn't sync properly until I restarted it. And a bug in Screen Sharing file transfer used up 20 GB of RAM to transfer a 40 GB file, which used swap space so my hard drive ran out of space.

My regular software breaks constantly. All the time. It's a rare day where everything works as it should.

LLMs have certainly gotten to the point where they seem about as reliable as the rest of the tools I use. I've never seen it say 2+2=5. I'm not going to use it for complicated arithmetic, but that's not what it's for. I'm also not going to ask my calculator to write code for me.

reply
candiddevmike
2 hours ago
[-]
Sorry you're being downvoted even though you're 100% correct. There are use cases where the poor LLM reliability is as good or better than the alternatives (like search/summarization), but arguing over whether LLMs are reliable is silly. And if you need reliability (or even consistency, maybe) for your use case, LLMs are not the right tool.
reply
AlexandrB
14 minutes ago
[-]
What I want from my tools is autonomy/control. LLMs raise the bar on being at the mercy of the vendor. Anything you can do with an LLM today can silently be removed or enshittified tomorrow, either for revenue or ideological reasons. The forums for Cursor are filled with people complaining about removed features and functional regressions.
reply
fennecfoxy
4 hours ago
[-]
Except it's more a case of "my phone won't teleport me to Hawaii sad faec lemme throw it out" than anything else.

There are plenty of people manufacturing their expectations around the capabilities of LLMs inside their heads for some reason. Sure, there's marketing, but for individuals susceptible to marketing without engaging some neurons and fact-checking, there's already not much hope.

Imagine refusing to drive a car in the 60s because they hadn't reached 1kbhp yet. Ahaha.

reply
skydhash
3 hours ago
[-]
> Imagine refusing to drive a car in the 60s because they haven't reach 1kbhp yet. Ahaha.

That’s very much a false analogy. In the 60s, cars were very reliable (not as much as today’s cars), but they were already an established means of transportation. 60s cars are much closer to today’s cars than 2000s computers are to current ones.

reply
AlexandrB
20 minutes ago
[-]
It's even worse, because even with an unreliable 60s car you could at least diagnose and repair the damn thing when it breaks (or hire someone to do so). LLMs can be silently, subtly wrong and there's not much you can do to detect it let alone fix it. You're at the mercy of the vendor.
reply
embedding-shape
4 hours ago
[-]
> What I want from my tools is reliability. Which is a spectrum, but LLMs are very much on the lower end.

"reliability" can mean multiple things though. LLM invocations are as reliable (granted you know how program properly) as any other software invocation, if you're seeing crashes you're doing something wrong.

But what you're really talking about is "correctness", I think, in the actual text of the response. And if you're expecting/waiting for that to be 100% "accurate" every time, then yeah, that's not a use case for LLMs, and I don't think anyone is arguing for jamming LLMs in there even today.

Where LLMs are useful is where there is no 100% "right or wrong" answer: think summarization, categorization, tagging, and so on.
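
As a minimal sketch of what I mean (the model name and labels here are my own assumptions for illustration, using the OpenAI Python client):

  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  def tag_ticket(text: str) -> str:
      # Many tickets have no single "correct" label, which is exactly
      # why a fuzzy tool like an LLM is a reasonable fit here.
      resp = client.chat.completions.create(
          model="gpt-4o-mini",  # assumption: any capable chat model works
          messages=[
              {"role": "system",
               "content": "Classify the support ticket as exactly one of: "
                          "billing, bug, feature-request, other. "
                          "Reply with the label only."},
              {"role": "user", "content": text},
          ],
      )
      return resp.choices[0].message.content.strip()

A wrong tag now and then is an acceptable failure mode for this kind of task, which is the distinction I'm drawing.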

reply
skydhash
3 hours ago
[-]
I’m not a native English speaker, so I checked the definition of reliability:

  the quality of being able to be trusted or believed because of working or behaving well
For a tool, I expect “well” to mean that it does what it’s supposed to do. My linter is reliable when it catches the bad patterns I wanted it to catch. My editor is reliable when I can edit code with it and the commands do what they’re supposed to do.

So for generating text, LLMs are very reliable. And they do a decent job at categorizing too. But code is a formal language, which means correctness is the end goal. A program may be valid and incorrect at the same time.

It’s very easy to write valid code. You only need the grammar of the language. Writing correct code is another matter, and the only one that is relevant. No one hires people for knowing a language’s grammar and verifying syntax. They hire people to produce correct code (and because few businesses actually want to formally verify it, they hire people who can write code with a minimal amount of bugs and who are able to eliminate those bugs when they surface).

reply
cpburns2009
2 hours ago
[-]
I'm a native English speaker. Your understanding and usage of the word "reliability" is correct, and that's the exact word I'd use in this conversation. The GP is playing a pointless semantics game.
reply
RicoElectrico
3 hours ago
[-]
You're preaching to the wrong crowd I guess. Many people here think in extremes.
reply
0xEF
4 hours ago
[-]
I was once in your camp, thinking there was some sort of middle ground to be had with the emergence of generative AI and its potential as a useful tool to help me do more work in less time, but I suppose the folks who opposed automated industrial machinery back in the day did the same.

The problem is that, historically speaking, you have two choices:

1. Resist as long as you can, risking being labeled a Luddite or whatever.

2. Acquiesce.

Choice 1 is fraught with difficulty, like a dinosaur struggling to breathe as an asteroid came and changed the atmosphere it had developed lungs to use. Choice 2 is a relinquishment of agency, handing over control of the future to the ones pulling the levers on the machine. I suppose there is a rare Choice 3 that only the elite few are able to pick, which is to accelerate the change.

My increased cynicism about technology was not something that I started out with. Growing up as a teen in the late-80's/early-90's, computers were hotly debated as being either a fad that would die out in a few years or something that was going to revolutionize the way we worked and give us more free time to enjoy life. That never happened, obviously. Sure, we get more work done in less time, but most of us still work until we are too broken to continue and we didn't really gain anything by acquiescing. We could have lived just fine without smartphones or laptops (we did, I remember) and all the invasive things that brought with it such as surveillance, brain-hacking advertising and dopamine burnout. The massive structures that came out of all the money and genius that went into our tech became megacorporations that people like William Gibson and others warned us of, exerting a level of control over us that turned us all into batteries for their toys, discarded and replaced as we are used up. It's a little frightening to me, knowing how hyperbolic that used to sound 30 years ago, and yet, here we stand.

Generative AI threatens so much more than just altering the way we work, though. In some cases, its use in tasks might even be welcomed. I've played with Claude Code, every generative model that Poe.com has access to, DeepSeek, ChatGPT, etc...they're all quite fascinating, especially when viewed as I view them: a dark mirror reflecting our own vastly misunderstood minds back to us. But it's a weird place to be in when you start seeing them replace musicians, artists, writers...all things that humanity has developed over many thousands of years as forms of existential expression, individuality, and humanness, because there is no question that we feel quite alone in our experience of consciousness. Perhaps that is why we are trying to build a companion.

To me, the dangers are far too clear and present to take any sort of moderate position, which is why I decided to stop participating in its proliferation. We risk losing something that makes us us by handing off our creativity and thinking to this thing that has no cognizance or comprehension of its own existence. We are not ready for AI, and AI is not ready for us, but as the Accelerationists and Broligarchs continue to inject it into literally every bit of tech they can, we have to make a choice; resist or capitulate.

At my age, I'm a bit tired of capitulating, because it seems every time we hand the reins over to someone who says they know what they are doing, they fuck it up royally for the rest of us.

reply
senordevnyc
4 minutes ago
[-]
Maybe the dilemma isn’t whether to “resist” or “acquiesce”, but rather whether to frame technological change as an inherently adversarial and zero sum struggle, versus looking for opportunities to leverage those technologies for greater productivity, comfort, prosperity, etc. Stop pushing against the idea of change. It’s going to happen, and keep happening, forever. Work with it.

And by any metric, the average citizen of a developed country is wildly better off than a century or two ago. All those moments of change in the past that people wrung their hands over ultimately improved our lives, and this probably won’t be any different.

reply
1dom
1 minute ago
[-]
I think we can all agree AI is a bubble, and is over-hyped. I think we can ignore any pieces that say "AI is all bad" or "AI is all good" or "I've never used AI but...".

It's nuanced, can be abused, but can be beneficial when used responsibly in certain ways. It's a tool. It's a powerful tool, so treat it like a powerful tool: learn about it enough to safely use it in a way to improve your life and those around you.

Avoiding it completely whilst confidently berating it without experience is a position formed from fear, rather than knowledge or experience. I'm genuinely very surprised this article has so many points here.

reply
abbadadda
4 hours ago
[-]
I really enjoyed how your words made me _feel._ They encouraged me to "keep fighting the good fight" when it comes to avoiding social media, et al.

I do Vibe Code occasionally, Claude did a decent job with Terraform and SaltStack recently, but the words ring true in my head about how AI weakens my thinking, especially when it comes to Python or any programming language. Tread carefully indeed. And reading a book does help - I've been tearing through the Dune books after putting them off too long at my brother's recommendation. Very interesting reflections in those books on power/human nature that may apply in some ways to our current predicament.

At any rate, thank you for the thoughtful & eloquent words of caution.

reply
scld
1 hour ago
[-]
Doesn't Python weaken your thinking about how computers actually work?
reply
markbnj
1 hour ago
[-]
I think the dangers that LLMs pose to the ability of engineers to earn a living are overstated, while at the same time the superpowers that they hand us don't seem to get much discussion. When I was starting out in the 80's I had to prowl dial-up BBSs or order expensive books and manuals to find out how to do something. I once paid IBM $140 for a manual on the VGA interface so I could answer a question. The turnaround time on that answer was a week or two. The other day I asked Claude something similar to this: "when using github as an OIDC provider for authentication and assumption of an AWS IAM role the JWT token presented during role assumption may have a "context" field. Please list the possible values of this field and the repository events associated with them." I got back a multi-page answer complete with examples.

I'm sure GitHub has documents out there somewhere that explain this, but typing that prompt took me two minutes. I'm able daily to get fast answers to complex questions that in years past would have taken me potentially hours of research. Most of the time these answers are correct, and when they are wrong it still takes less time to generate the correct answer than all that research would have taken before. So I guess my advice is: if you're starting out in this business, worry less about LLMs replacing you and more about how to efficiently use that global expert on everything that is sitting on your shoulder. And also realize that code, and the ability to write working code, is a small part of what we do every day.

reply
skydhash
1 hour ago
[-]
I’m glad you listed the manual example. Usually when people are solving problems, they’re not asking the kind of super targeted question in your second example. Instead it’s an exploration. You read and target the next concept you need to understand. And if you do have this specific question, you want the surrounding context, because you’ll likely have more questions after the first.

So what people do is collect documentation. Give it a glance (or at least the TOC), then start the process of understanding the concepts. Sure, you can ask for the escape code for setting a terminal title, but will it say that not all terminals support that code? Or that piping does not strip out escape codes? That’s the kind of gotcha you can learn from proper manuals.

reply
hvb2
1 hour ago
[-]
> So I guess my advice is: if you're starting out in this business worry less about LLMs replacing you and more about how to efficiently use that global expert on everything that is sitting on your shoulder.

There's a real danger in that they use so many resources though. Both in the physical world (electricity, raw materials, water, etc.) and in a financial sense.

All the money spent on AI will not go to your other promising idea. There's a real opportunity cost there. I wouldn't be surprised if, at this point, good ideas go without funding because they're not AI.

reply
pneumic
36 minutes ago
[-]
For me LLMs have been an incredible relief when it comes to software planning—quickly navigating the paralyzing quantity of choices when it comes to infrastructure, deployment, architecture and so on. Of course, this only highlights how crushingly complex it all is now, and I get a sinking feeling that instead of people solving technical complexity where it needs solving, these tools will be an abstraction layer over ever-rolling balls of mud that no one bothers to clean up anymore.
reply
tasty_freeze
27 minutes ago
[-]
I learned to code in the late 70s on computers using BASIC, then got into Z80 assembly language. Sure, the games we wrote back then were nothing like today's 10GB, $100M+ multi-year projects, but they were still extremely exciting because expectations were much lower back then.

Anyway, the point I'm getting to is that it was glorious to understand what every bit of every register and every I/O register did. There were NO interposing layers of software that you didn't write yourself or didn't understand completely. I even wrote a disassembler for the BASIC ROM and spent many hours studying it so I could take advantage of useful subroutines. People even published books that had all of that mapped out for you (something like "Secrets of the TRS-80 ROM Decoded").

Recently I have been helping a couple teenagers in my neighborhood learn Python a couple hours a week. After installing Python and going through the foundational syntax, you bet I had them write many of those same games. Even though it was ASCII monsters chasing their character on the screen, they loved it.

It was similar to this, except it was real-time with a larger playfield:

https://www.reddit.com/r/retrogaming/comments/1g6sd5q/way_ba...
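
For anyone curious, the core of that kind of game fits in a page of Python. A rough sketch (made-up layout, and turn-based rather than real-time to keep it short):

  WIDTH, HEIGHT = 20, 10

  def draw(player, monster):
      # Render the playfield: @ is the player, M the monster.
      for y in range(HEIGHT):
          print("".join(
              "@" if (x, y) == player else
              "M" if (x, y) == monster else "."
              for x in range(WIDTH)))

  def chase(monster, player):
      # The monster steps one cell toward the player each turn.
      (mx, my), (px, py) = monster, player
      return (mx + (px > mx) - (px < mx),
              my + (py > my) - (py < my))

  player, monster = (0, 0), (WIDTH - 1, HEIGHT - 1)
  moves = {"w": (0, -1), "s": (0, 1), "a": (-1, 0), "d": (1, 0)}
  while player != monster:
      draw(player, monster)
      dx, dy = moves.get(input("move (wasd): "), (0, 0))
      player = (max(0, min(player[0] + dx, WIDTH - 1)),
                max(0, min(player[1] + dy, HEIGHT - 1)))
      monster = chase(monster, player)
  print("The monster got you!")

The real-time version needs something like curses for non-blocking input, but the chase logic is the same.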

reply
zajio1am
2 hours ago
[-]
> We programmers are currently living through the devaluation of our craft.

Valuation is fundamentally connected to scarcity. 'Devaluation' is just negative spin for making it plentiful.

When circumstances change to make something less scarce, one cannot expect to get the same value for it because of past valuation. That is just rent-seeking.

reply
rbongers
1 hour ago
[-]
I view current LLMs as a new kind of search engine: one where you have to re-verify the responses, but which, on the other hand, can answer long and vague queries.

I really don't see the harm in using them this way that can't also be said about traditional search engines. Search engines already use algorithms, it's just swapping out the algorithm and interface. Search engines can bias our understanding of anything as much as any LLM, assuming you attempt to actually verify information you get from an LLM.

I'm of the opinion that if you think LLMs are bad without exception, you should either question how we use technology at all or question this idea that they are impossible to use responsibly. However I do acknowledge that people criticize LLMs while justifying their usage, and I could just be doing the same thing.

reply
tylershuster
30 minutes ago
[-]
Exactly. Using them to actually “generate content” is a surefire way to turn your brain into garbage, along with whatever you “produce” - but they do seem to have fulfilled Google’s dream of making the Star Trek computer a reality.
reply
phplovesong
4 minutes ago
[-]
We will have decades of AI slop that needs to be cleaned up. Many startups will fail hard when the AI code bugs all creep up once a certain scale is reached. There will be massive data loss, and lots of hacking attempts will succeed because of poor AI code no one understands. I don't see a dev staying in the same place for many years when it's just soulless AI day in, day out.

Either way it's a lost cause.

reply
yathaid
58 minutes ago
[-]
>> No matter how well “AI” works, it has some deeply fundamental problems, that won’t go away with technical progress.

Without an explanation of what the author is calling out as flaws, it is hard to take this article seriously.

I know engineers I respect a ton who have gotten a bunch of productivity upgrades using "AI". My own learning curve has been to see Claude say "okay, these integration tests aren't working. Let me write unit tests instead" and go on, when it wasn't able to fix a Jest issue.

reply
layer8
40 minutes ago
[-]
The entire rest of the article consists of describing the problems. The problems aren’t about the technical abilities of AI.
reply
ErroneousBosh
3 hours ago
[-]
I recently had to write a simple web app to search through a database, but full-text searching wasn't quite cutting it. The underlying data was too inconsistent and the kind of things people would ask for would mean searching across five or six columns.

Just the job for an AI agent!

So what I did is this - I wrote the app in Django, because it's what I'm familiar with.

Then in the view for the search page, I picked apart the search terms. If they start with "01" it's an old phone number, so look in that column; if they start with "03" it's a new phone number, so look in that column; if they start with "07" it's a mobile; if it's a letter followed by two digits it's a site code; if it's numeric but doesn't have a 0 at the start it's an internal number; and if it doesn't match anything, see if it exists as a substring in the description column.
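
Roughly this shape, if you sketch the dispatch out (column names invented for illustration, not the real schema):

  import re
  from django.db.models import Q

  def build_query(term: str) -> Q:
      # Route the search term to the column it most likely targets.
      if term.startswith("01"):
          return Q(old_phone__contains=term)
      if term.startswith("03"):
          return Q(new_phone__contains=term)
      if term.startswith("07"):
          return Q(mobile__contains=term)
      if re.fullmatch(r"[A-Za-z]\d\d", term):
          return Q(site_code__iexact=term)
      if term.isdigit() and not term.startswith("0"):
          return Q(internal_number=term)
      # Fallback: substring match in the description column.
      return Q(description__icontains=term)

The view then just does something like Entry.objects.filter(build_query(term)) per term, Entry being whatever the model is.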

There we go. Very fast and natural searching that Does What You Mean (mostly).

No Artificial Intelligence.

All done with Organic Home-grown Brute Force and Ignorance.

Because that's sometimes just what you need.

reply
trashb
1 hour ago
[-]
In general, using natural language fed into AI to generate code that compiles into runnable software seems like the long way around to designing a more usable programming language.

It seems that most people preferring natural language over programming languages don't want to learn the required programming language, and end up reinventing their own, worse one.

There is a reason why we invented programming languages as an interface to instruct the machine and there is a reason why we don't use natural language.

reply
0xFEE1DEAD
2 hours ago
[-]
I honestly don't get vibe coding.

I've tried it multiple times, but even after spending 4 hours on a fresh project I don't feel like I know what the hell is going on anymore.

At that point I'm just guessing what the next prompt is to make it work. I have no critical knowledge about the codebase that makes me feel like I could fix an edge case without reading the source code line by line (which at that point would probably take longer than 4 hours).

I don't understand how anyone can work like that and have confidence in their code.

reply
gavmor
2 hours ago
[-]
> I have no critical knowledge about the codebase that makes me feel like I could fix an edge case without reading the source code line by line (which at that point would probably take longer than 4 hours).

Peter Naur argues that programming is fundamentally an activity of theory building, not just program text production. The code itself is merely the artifact of the real work.

You must not confuse the artifact (the source code) with the mind that produced the artifact. The theory is not contained in the text output of the theory-making process.

The problems of program modification arise from acting on the assumption that programming is just text production; the decay of a program is a result of modifications made by programmers without a proper grasp of the underlying theory. LLMs cannot obtain Naur's Ryleian "theory" because they "ingest the output of work" rather than developing the theory by doing the work.

LLMs may _appear_ to have a theory about a program, but this is an illusion.

To believe that LLMs can write software, one must mistakenly assume that the main activity of the programmer is simply to produce source code, which is (according to Naur) inaccurate.

reply
0xFEE1DEAD
1 hour ago
[-]
I agree with this take, but I'm wondering what vibe coders are doing differently?

Are they mainly using certain frameworks that already have a rigid structure, thus allowing LLMs to not worry about code structure/software architecture?

Are they worry-free and just run with it?

Not asking rhetorically, I seriously want to know.

reply
AlexandrB
10 minutes ago
[-]
It's interesting that this is a similar criticism to what was levelled at Ruby on Rails back in the day. I think generating a bunch of code - whether through AI or a "framework" - always has the effect of obscuring the mental model of what's going on. Though at least with Rails there's a consistent output for a given input that can eventually be grokked.
reply
teaearlgraycold
1 hour ago
[-]
I recently made a few changes to a small personal web app using an LLM. Everything was 100% within my capabilities to pull off. Easily a few levels below the limits of my knowledge. And I’d already written the start of the code by hand. So when I went to AI I could give it small tasks. Create a React context component, store this in there, and use it in this file. Most of that code is boilerplate.

Poll this API endpoint in this file and populate the context with the result. Only a few lines of code.

Update all API calls to that endpoint with a view into the context.

I can give the AI those steps as a list and go adjust styles on the page to my liking while it works. This isn’t the kind of parallelism I’ve found to be common with LLMs. Often you are stuck on figuring out a solution. In that case AI isn’t much help. But some code is mostly boilerplate. Some is really simple. Just always read through everything it gives you and fix up the issues.

After that sequence of edits I don’t feel any less knowledgeable of the code. I completely comprehend every line and still have the whole app mapped in my head.

Probably the biggest benefit I’ve found is getting over the activation energy of starting something. Sometimes I’d rather polish up AI code than start from a blank file.

reply
layer8
32 minutes ago
[-]
If you’re reviewing the code, it’s not vibe coding. You’re relying on your assessment of the code, not on the “vibes” of the running program.
reply
atleastoptimal
1 hour ago
[-]
HN loves this "le old school" coder "fighting the good fight" speak but it seems sillier and sillier the better and better LLM's get. Maybe in the GPT 4 era this made sense but Gemini 3 and Opus 4.5 are substantively different, and anyone who can extrapolate a few years out sees the writing on the wall.

A year ago, no reasonable person would use AI for anything but small-scoped autocomplete. Now it can author entire projects without oversight. Inevitably every failure case for LLMs is corrected; everything people said LLMs "could never do" they start doing within 6 months of that prognostication.

reply
AlexandrB
7 minutes ago
[-]
Where are these LLM-authored projects though? I was expecting to see a fresh flood of cheap shovelware in the App Store after LLMs appeared.
reply
hartator
4 hours ago
[-]
The main thing is everyone seems to hate reading someone else's ChatGPT output while we are still eager to share ours with others as if it's some sort of oracle.
reply
micromacrofoot
1 minute ago
[-]
Personally I use AI for most of my work, launder it a bit to adhere to my own personal style, and don't tell anyone most of the time.

In the end? no one cares. I get just as much done (maybe more), while doing less work. Maybe some of my skills will atrophy, but I'll strengthen others.

I'm still auditing everything for quality as I would my own code before pushing it. At the end of the day, it usually makes fewer typos than I would. It certainly searches the codebase better than I do.

All this hype on both ends will fade away, and the people using the tools they have to get things done will remain.

reply
zkmon
4 hours ago
[-]
So, you want to rebel and stay an organic-minded human? But then what exactly is "being a human"?

The biological senses and abilities were constantly augmented throughout the centuries, pushing the organic human to hide inside ever deeper layers of what you call yourself.

What's yourself without your material possessions and social connections? There is no such thing as yourself without these.

Now let's wind back. Why resist just one more layer of augmentation of our senses, mind and physical abilities?

reply
zero-st4rs
4 hours ago
[-]
> What's yourself without your material possessions and social connections? There is no such thing as yourself without these.

perhaps a being that has the capacity for intention and will?

reply
zkmon
3 hours ago
[-]
The capacity for intention and will was already driven by augmentations, namely knowledge and reasoning. Knowledge was sourced externally, and reasoning was developed from externally recorded memory of the past. Even instincts get updated by experiences and knowledge.
reply
zero-st4rs
2 hours ago
[-]
I'm not sure if you wrote this with AI, but could you provide examples?

Knowledge is shaped by constraints, which inform intention; it doesn't "drive" it.

"I want to fly, I intend to fly, I learn how to achieve this by making a plane."

not

"I have plane making knowledge therefore I want and intend to fly"

However, I totally understand that constraints often create a feedback loop where reasoning is reduced to the limitations which confine it.

My Mom has no idea that "her computer" != "windows + hp + etc", and if you were to ask her how to use a computer, she would be intellectually confined to a particular ecosystem.

I argue the same is true for capitalism/dominant culture. If you can't "see" the surface of the thing that is shaping your choices, chances are your capacity for "will" is hindered and constrained.

Going back to this.

> What's yourself without your material possessions and social connections? There is no such thing as yourself without these.

I don't think my very ability to make choices comes from owning stuff and knowing people.

reply
zkmon
23 minutes ago
[-]
I agree that you are an agent capable of having an intention, but that capability needs inputs from outside. Your knowledge and reasoning don't entirely reside inside you. Having the ability of intention is like being a car engine, waiting for inputs or triggers for action.

And no, I don't need AI for this level of inquiry.

reply
periodjet
26 minutes ago
[-]
The todsacerdoti rule continues to hold.
reply
disambiguation
1 hour ago
[-]
I'm under the impression that AI is still negative ROI. Creating absolute value is different from creating value greater than the cost. A tool is a tool, but could you continue performing professionally if it was suddenly no longer available?
reply
nullbyte808
4 hours ago
[-]
As a crappy programmer I love AI! Right now I'm focusing on building up my math knowledge, general CS knowledge, and ML knowledge. In the future, knowing how to read and understand code may be more important than writing it.

I think it's amazing what giant vector matrices can do with a little code.

reply
skydhash
4 hours ago
[-]
The thing about reading code and understanding it is logical reasoning, which you can do by knowing the semantics of each token. But the semantics are not universal. You have the Turing machine, the lambda calculus, Horn clauses, etc… Then there are more abstractions (and new semantics) built on top of those.

Writing code is very easy if you know the solution and the semantics of the coding platform. But knowing the solution is a difficult task, even in a business setting where the difficulties are mostly communication issues. Knowing the semantics of the coding platform is also difficult, because you’ll probably be using others’ code and you’ll face the same communication issues (lack of documentation, erroneous documentation, etc…)

So being good at programming does not really mean knowing code. It’s more about knowing how to bypass communication barriers to get the knowledge you need.

reply
cwiz
2 hours ago
[-]
Well, maybe adopt the outlook that things you think are real aren't, and just maybe it will work just as well if you completely ignore them. Going forward, ignoring AI that is smarter than autocomplete may be just the way to go.
reply
nachox999
2 hours ago
[-]
It's up to you to use candles instead of lightbulbs
reply
the__alchemist
1 hour ago
[-]
I am getting tilted by both corporate AI hype and luddites like this. If you don't think that term is appropriate here, then I am not sure it's ever appropriate to use in the general sense. The "I know you will say you use it appropriately but others don't" pre-emption is something I have seen before, and it isn't convincing.

This article lacks nuance and could be summarized as "LLMs are bad." Later, I suspect, this author (and others of this archetype) will moderate and lament: "What I really meant was: I don't like corporations lying about LLMs, or using them maliciously; I didn't imply they don't have uses." The words in the article do not support this.

I believe this pattern is rooted in social-justice-oriented (Is that still the term?) USA left politics. I offer no explanation for this conflation, only an observation.

reply
Separo
4 hours ago
[-]
If, as the author suggests, AI is inherently designed to further concentrate control and capital, that may be so, but that is also the aim of every business.
reply
64718283661
1 hour ago
[-]
The answer is I will kill myself when I become replaced by LLMs entirely.
reply
noobcoder
4 hours ago
[-]
I see this play out everywhere, actually: code, thoughts, even intent, atomized for the capital engine. It's more than a productivity hack; it's a subtle power shift, with decisions getting abstracted and agency getting diluted.

Opting in to weirdness and curiosity is the only bug worth keeping, which will eventually become a norm.

reply
ciconia
1 hour ago
[-]
Where's the popcorn? I don't really care either way about so-called AI. I find the talk about AGI quite ridiculous, but I can imagine LLMs have their utility just like anything else. I don't vibe code because I don't find it useful. I'm fine coding by myself thank you very much.

When the AI hype is over and the bubble has burst, I'll still be here, writing quality software using my brain and my fingers, and getting paid to do it.

reply
mazone
4 hours ago
[-]
Does the author feel the same way about running the models locally?
reply
arowthway
4 hours ago
[-]
I like the "what’s left" part of the article. It’s applicable regardless of your preferred flavor of resentment about where things are going.
reply
constantcrying
53 minutes ago
[-]
>I find it particularly disillusioning to realize how deep the LLM brainworm is able to eat itself even into progressive hacker circles.

This is such a bizarre sentiment for any person interested in technology. AI is, without any doubt, the most fascinating and important technology I have seen developed in my lifetime. A decade ago the idea of a computer not only holding a reasonable conversation with a human, but being able to talk with a human on deep and complex subjects seemed far out of reach.

No doubt there are many deep-running problems with it; any technology with such a radical breakthrough will have them. But none of that takes away from how monumental of an achievement it is.

Looking down at people for using it or being excited about it is such an extreme position. Also, the insinuation that the only reason anybody uses it is because they are forced into it is completely bizarre.

reply
simianwords
4 hours ago
[-]
What's with these kinds of people and their obsession with the pejorative "fascist"? Overused to the point where it means nothing.
reply
nathias
35 minutes ago
[-]
Every important technology produces its own Amish.
reply
TechDebtDevin
35 minutes ago
[-]
First accurate article with AI in the name I've seen on this site in a long time.
reply
geldedus
23 minutes ago
[-]
Cope better, Luddite.
reply
Glemkloksdjf
4 hours ago
[-]
It's ignorant. That's what it is.

Big tech will build out compute at a never-before-seen speed, and we will reach 2e29 FLOPs faster than ever.

Big tech companies are competing with each other, and they are the ones with the real money in our capitalistic world. But even if they were to slow down among themselves, countries are now competing as well.

In the next 4 years, with the massive build-out of compute, we will see a lot more clearly how the progress will go.

And either we hit obvious limitations or not.

If we do not see an obvious limitation, Fiona's opinion will have zero relevance.

The best chance for everyone is to keep a very, very close eye on AI, to either make the right decisions (not buying that house with a line of credit; creating your own product a lot faster thanks to AI, ...) or be aware of what is coming.

Thanks for the fish and enjoy the ride.

reply
JimmaDaRustla
2 hours ago
[-]
We don't care that you don't care
reply
jillesvangurp
1 hour ago
[-]
Harsh but fair. In short, some people are upset about change happening to them. They think it's unfair and that they deserve better. Maybe that's true. But unfair things happen to lots of people all the time. And ultimately people move on, mostly. There's a futility to being very emotional about it.

I don't get all the whining from people about having to adapt. That's a constant in our industry and always has been. If what you were doing was so easy that it fell victim to the first generation of AI tools that are doing a decent enough job of it, then maybe what you were doing was a bit Groundhog Day to begin with. I've certainly been involved with a lot of projects where a lot of the work felt that way. Customer wants a web app thing with a log-in flow and a this and a that. 99% of that stuff is kind of very predictable. That's why agentic coding tools are so good at this stuff. But let's be honest, it was kind of low-value stuff to begin with. And it's nice that people overpaid for that for a while, but it was never going to be forever.

There's still plenty of stuff these tools are less good at. It gets progressively harder if you are integrating lots of different niche things or doing some non standard/non trivial things. And even those things where it does a decent job, it still requires good judgment and expertise to 1) be able to even ask for the right thing and then 2) judge if what comes back is fit for purpose.

There's plenty of work out there supporting companies with decades of legacy software that are not going to be throwing away everything they have overnight. Leveling up their UIs with AI-powered features, cross-integrating a lot of stuff, etc. is going to generate lots of work and business. And most companies are very poorly equipped to do that in house even if they have access to agentic coding tools.

For me AI is actually generating more work, not less. I'm now taking on bigger things that were previously impossible to take on without involving more people. I have about 10x more things I want to do than I have bandwidth for. I have to take decisions about doing things the stupid old way because it's better/faster or attempting to generate some code. All new tools do is accelerate the pace and raise the ambition levels. That too is nothing new in our industry. Things that were hard are now easy, so we do more of them and find yet harder things to do next. We're not about to run out of hard things to do any time soon.

Adapting is hard. Not everyone will manage. Some people might burn out doing that or change career. And some people are in denial or angry about that. And you can't really expect others to lose a lot of sleep over this. Whether that's unfair or not doesn't really matter.

reply
everdrive
2 hours ago
[-]
>In a world where fascists redefine truth, where surveillance capitalist companies, more powerful than democratically elected leaders, exert control over our desires, do we really want their machines to become part of our thought process? To share our most intimate thoughts and connections with them?

Generally speaking people just cannot really think this way. People broadly are short term thinkers. If something is convenient, people will use it. Is it easier to spray your lawn with pesticides? Yep, cancer (or biome collapse) is a tomorrow problem and we have a "pest" problem today. Is it difficult to sit alone with your thoughts? Well good news, YouTube exists and now you don't have to. What happens next (radicalization, tracking, profiling, propaganda, brain rot) is a tomorrow problem. Do you want to scroll at the end of the day and find out what people are talking about? Well, social media is here for you. Whether or not it's accidentally part of a privatized social credit system? Well again, that's a problem for later. I _need_ to feel comfortable _right now_. It doesn't matter what I do to the world so long as I'm comfortable _right now._

I don't see any way out of it. People can't seem to avoid these patterns of behavior. People asking for regulation are about as realistic as people hoping for abstinence. It's a correct answer in principle but just isn't going to happen.

reply
Razengan
1 hour ago
[-]
> I _need_ to feel comfortable _right now_. It doesn't matter what I do to the world so long as I'm comfortable _right now._

I think that can be offset if you have a strong motivation, a clear goal to look forward to in a reasonable amount of time, to help you endure the discomfort:

Before I had enough financial independence to be able to travel at will, I was often stuck in a shit ass city, where the most fun to be had was video games and fantasizing about my next vacation coming up in a month or 2, and that helped me a lot in coping with my circumstances.

Too few people are allowed or can afford even this luxury of a pleasant future, the promise of a life different from or better than their current one.

reply
Razengan
2 hours ago
[-]
> People broadly are short term thinkers.

I wonder how much of that is "nature vs. nurture"?

Like the Tolkienesque elves in fantasy worlds, would humans be more chill too if our natural lifespans were counted in centuries instead of decades?

Or is it the pace of society, our civilization, that always keeps us on edge?

I mean, I'm not sure if we're born with a biological sense of mortality, an hourglass of doom encoded into our genes...

What if everybody had 4 days of work per week, guaranteed vacation time every few months, kids didn't have to wake up at 7 or 8 in the morning every day, and progress was measured biennially, e.g. 2 years between school grades/exams, with economic performance also reviewed in 2-year periods, and so on; could we as a species mellow the fuck out?

reply
IAmBroom
1 hour ago
[-]
I've wondered about this a lot, and I think it's genetic and optimized for survival in general.

Dogs barely set food aside; they prefer gorging, which is a good survival technique when your food spoils and can be stolen.

Bees, at the other end of the spectrum, spend their lives storing food (or "canning", if you will - storing prepared food).

We first evolved in areas that were storage-averse (Africa), and more recently many of us moved to areas with winters (where storage is both useful and necessary). I think "finish your meal, you might not get one tomorrow" is our baseline survival instinct; "Winter is coming!" is an afterthought, and might be more nurture-based behavior than the other.

reply
Razengan
1 hour ago
[-]
Yes, and it's barely been 100 years, probably closer to 50, since we have had enough technology to make the daily lives of most (or half the) humans in the world comfortable enough that they can safely take 1-2 days off every week.

For the first time in human history most people don't have to worry about famine, wars, disasters, or disease upending their lives; they can just wait it out in their homes.

Will that eventually translate to a more relaxed "instinct"?

reply
justincormack
4 hours ago
[-]
It's interesting how people are still very positive about Marx's labour theory of value, despite it being very much of its time and thoroughly discredited.
reply
shadowgovt
22 minutes ago
[-]
I think the author makes some very good points, but it's perhaps worth noting that they describe the current status quo. I do find myself wondering whether the author would re-evaluate in a future where the technology gets cheaper, the executable AI engine fits on a standalone Raspberry Pi, and retraining the engine is done by volunteer co-ops.

... but, it is definitely worth considering whether the status quo is tolerable and whether we as technical creatives are willing to work with tools that live within it.

reply
eisbaw
3 hours ago
[-]
Programming and CS is the art of solving problems - hopefully problems that matter.

AI lets you do that faster.

AI may suggest a dumb way, so you have to think, and tell it what to do.

I think faster than I type, so with AI doing the typing, the bottleneck has switched from typing to thinking!

Don't let AI think for you. Do actual intentional architecture design.

Programmers who don't know CS and only care about hammering the keyboard because they're artisans have little future.

AI also gives me back my hobby after having kids -- time is valuable, and AI is energy-efficient.

We are truly living in a Cambrian explosion -- a lot of slop will be produced, but market and selection pressure will weed it out.

reply
Atlas667
38 minutes ago
[-]
Luddism is a reaction to the current situation as it pertains to labor. Marx had this to say about it:

"It took both time and experience before the workers learned to distinguish between machinery and its employment by capital, and to direct their attacks, not against the material instruments of production, but against the mode in which they are used."

- Karl Marx. Das Kapital Vol 1 Ch 15: Machinery and Modern Industry, 1867

Tech can always be good; how it's used is what makes it bad, or not.

reply
skwee357
4 hours ago
[-]
Well, there are two aspects from which I can react to this post.

The first aspect is the “I don’t touch AI with a stick” one. AI is a tool. Nobody is obligated to touch it, obviously, but it is useful in certain situations. So I disagree with the author’s position of avoiding AI. It reads like stubbornness for the sake of avoiding new tech.

The second angle is the “big tech corporate control” angle. And honestly, I don’t get this argument at all. Computers and the digital world have created the biggest dystopia we have ever witnessed, from absurd amounts of misinformation and propaganda fueled by bot farms operated at government level, all the way to digital surveillance tech. Holding that strong an opinion against big tech and digital surveillance, and blaming AI for it while enjoying the other perils of big tech, is virtue signaling.

Also, what’s up with the overuse of “fascism” in places where it does not belong?

reply
dangus
4 hours ago
[-]
This piece started relatively well but devolved by the end.

Is AI resource-intensive by design? That doesn’t make any sense to me. I think companies are furiously working toward reducing AI costs.

Is AI a tool of fascism? Well, I’d say anything that can make money can be a tool of fascism.

I can sort of jibe with the argument that AI is/will be reinforcing the ideals of those in power, although I think traditional media and the tooling that AI intends to replace, like search engines, accomplished that just fine.

What we are left with is, I think, an author who is in denial about their special snowflake status as a programmer. It was okay for the factory worker to be automated away, but now that it’s my turn to be automated away I’m crying fascism and ethics.

Their friends behave the way they do about AI because they know it’s useful but know it’s unpopular. They’re trying to save face while still using the tool because it’s so obviously useful and beneficial.

I think the analogy is similar to the move from film to digital. There will be a tiny number of people who never buy in; there will be these “ashamed” adopters who support the idea of film and hope it continues on, but who personally would never go back to it; and then the majority, who don’t see the problem with letting film die.

reply
aforwardslash
5 hours ago
[-]
Every time I read one of these "I don't use AI" posts, the content is either "my code is handcrafted in a mountain spring and blessed by the universe itself, so no AI can match it", or "everything different from what I do is technofascism or <insert politics rant here>". Maybe I'm missing something, but tech is controlled by a handful of companies - always has been; and sometimes code is just code, and AI is just a tool. What am I missing?
reply
ryanjshaw
4 hours ago
[-]
I was embarrassed recently to realize that almost all the code I create these days is written by AIs. Then I realized that’s OK. It’s a tool, and I’m making effective use of it. My job was to solve problems, not to write code.

I have a little pet theory brewing. Corporate lore claims that we hire junior devs who become intermediate devs, who then become senior devs. The doomsday crowd claims that AI has replaced junior and intermediate devs, and is coming for the senior devs next.

This has felt off to me because I do way more than just code. Business users don’t want to get into the details of building software. They want a guy like me to handle that.

I know how to talk to non-technical SMEs and extract their real requirements. I understand how to translate this into architecture decisions that align with the broader org. I know how to map it into a plan that meets those org objectives. And so on.

I think what really happens is that nerds exist, and through osmosis a few of them become senior developers. They in turn have junior and intermediate assistant developers to help them deliver. Sometimes those assistants turn out to be nerds themselves, and they spontaneously transmute into senior developers!

AI is replacing those assistant human developers, but we will still need the senior developers because most business people want to sit with a real human being to solve their problem.

I will, however, get worried when AIs start running businesses. Then we are in trouble.

reply
fragmede
4 hours ago
[-]
Anthropic ran a vending machine business as an experiment, but I can't imagine that someone out there isn't already seriously running one in production.
reply
ryanjshaw
2 hours ago
[-]
I’ve been tempted to define my life in a big prompt and then do something like: it’s 6:05, Ryan has just woken up, what action (10 min or less) does he take? I wonder where I’ll end up if I follow it to a T.
reply
xg15
4 hours ago
[-]
> Maybe Im missing something, but tech is controlled by a handful of companies - always have been;

The entire open source movement would like a word with you.

reply
IAmBroom
1 hour ago
[-]
So would disruptive young Mr. Gates.
reply
acuozzo
1 hour ago
[-]
> Maybe I'm missing something, but tech is controlled by a handful of companies - always has been

I guess it depends on what you define as "tech", but the '80s, '90s, and early '00s had an explosion of tiny hardware and software startups. Some even threatened Intel with x86 clones.

It wasn't until the late '90s that NVIDIA was the clear GPU winner, for instance. It had serious competition from 3DFX, ATI, and a bunch of other smaller companies.

reply
owenthejumper
4 hours ago
[-]
You are not missing much. Yes, there will be situations where AI won’t be helpful, but those are not the majority.

Used right, Claude Code is actually very impressive. You just have to already be a programmer to use it right - divide the problem into small chunks yourself, instruct it to work on the small chunks.
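
To make "divide into small chunks" concrete, here is the kind of sequence I mean (a purely hypothetical session; the file names, function names, and prompts are invented for illustration):

  1. "Add a retry-with-backoff helper to http_client.py. Don't touch anything else."
  2. "Use that helper in fetch_orders(), keeping its public signature unchanged."
  3. "Write unit tests covering only the retry path."

Each chunk is small enough to review in full before handing over the next one, so the design stays in your hands rather than the tool's.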

Second example - there is a certain expectation of language in American professional communication. As a non-native speaker, I can tell you that not following that expectation has a real impact on a career. AI has been transformational: I write an email myself and ask it to ‘make this into American professional English’.

reply
technothrasher
4 hours ago
[-]
> What am I missing?

The youthful desire to rage against the machine?

reply
irilesscent
4 hours ago
[-]
I prefer eternally enslaving a machine to do my bidding over just raging at it.
reply
megous
4 hours ago
[-]
Not much. Even the argument that AI is another tool to strip people of power is not that great.

It's possible to use AI chatbots against the system of power: to help detect and point out manipulation or lack of nuance in arguments and political texts; to help decipher legalese in contracts or point out problematic passages in terms of use; to help with interactions with the state, even non-trivial ones like FOI requests or disputing information disclosure rejections, etc.

AI tools can be used to help against the systems of power.

reply
andrepd
4 hours ago
[-]
Yes, the black box that has been RLHF'd in god knows what way is surely going to help you gain power, and not its owners...
reply
nullbyte808
5 hours ago
[-]
Exactly.
reply
stuartjohnson12
4 hours ago
[-]
There's a lot of overlap between "AI is evil megacapitalism" and "AI is ineffective". I never understood the latter, but I am increasingly arriving at the understanding that the latter claim isn't real; it's just a soldier in the war being fought over the former.
reply
trashb
1 hour ago
[-]
reply
stuartjohnson12
27 minutes ago
[-]
Open source library development has to follow very tight style conventions because of its extremely distributed nature, and because of the degree to which feature development is as much the design of new standards as it is the writing of working code. I would imagine that it is perhaps the kind of programming least well suited to AI assistance.

AI speeds me up a tremendous amount in my day job as a product engineer.

reply
zero-st4rs
4 hours ago
[-]
I read the intersection as this:

We shape the world through our choices, generally under the umbrella of deterministic systems. AI is non-deterministic, and instead amplifies the concerns of a few wealthy corporations/individuals.

So is AI effective at generating marketing material or propagating arguably vapid value systems in the face of ecological, cultural, and economic crisis? I'd argue yes. But effectiveness also depends on intention, and that's not my intention, so it's not as effective for me.

I think we need more "manual" choice, and more agency.

reply
andrepd
4 hours ago
[-]
Ineffective at what? Writing good code, or producing any sort of valuable insight? Yes, it's ineffective. Writing unmaintainable slop at line rate? Or writing internet-filling spam, or propagating their owners' points of view? Very effective.

I just think the things they are effective at are a net negative for most of us.

reply
mono442
2 hours ago
[-]
The author may not care, but I doubt people care whether a piece of software has been developed by AI instead of by a human. Just like nobody cares whether a hole has been dug by hand using a shovel or by an excavator.
reply
notTooFarGone
31 minutes ago
[-]
People care if there is no one able to fix or adjust the software.

Think of old SAP systems with a million obscure customizations - any medium-to-large codebase that is mostly vibe-coded is instantly legacy code.

In your hole analogy: people don't care if a mine is dug by a bot or planned by humans until there are structural integrity issues or collapsing tunnels and nobody is able to read the map properly.

reply
cpburns2009
2 hours ago
[-]
The problem is the AI excavator destroying the road because it hallucinated the ditch.
reply
EdiX
4 hours ago
[-]
I don't think I'm going to take seriously an argument that uses Marx as its foundation but I'm glad that the pronouns crowd has had to move on from finger wagging as their only rhetorical stance.
reply
mahrain
4 hours ago
[-]
Reading the blog post, I felt the Marxist sentiment creeping in, and then I also saw actual Marx referenced in the footnotes.
reply
bgwalter
2 hours ago
[-]
The increasingly rough tone against "AI" critics in the comments, and the preposterous talking points ("you are not a senior developer if you do not get value from 'AI'"), are an indication that the bubble will burst soon.

It is the tool-obsessed people, who treat everything like a computer game, that like "AI" for software engineering. Most of them have never written anything substantial themselves and only know the Jira workflow for small and insignificant tickets.

reply
raincole
4 hours ago
[-]
> LLM brainworm is able to eat itself even into progressive hacker circles

What a loaded sentence lol. Implying being a hacker has some correlation with being progressive. And implying somehow anti-AI is progressive.

> AI systems being egregiously resource intensive is not a side effect — it’s the point.

Really? So we're not going to see AI users celebrating over how much less power DeepSeek used, right?

Anyway, guess what else is resource intensive? Making chips. Follow this line of logic and you will find that computers consolidate power, and that real progressive hackers should use pencil and paper only.

Back to the first paragraph...

> almost like a reflex, was a self-justification of why the way they use these tools is fine, while other approaches were reckless.

The irony is through the roof. This article is essentially: when I use computational power how I like, it's being a hacker; when others use computational power their way, it's being fascists.

reply
_heimdall
4 hours ago
[-]
> Implying being a hacker has some correlation with being progressive

I didn't read it that way. "Progressive hacker circles" doesn't imply that all hackers are progressive, it can just be distinguishing progressive circles from conservative ones.

reply
embedding-shape
4 hours ago
[-]
> Implying being a hacker has some correlation with being progressive

I mean, yeah, that kind of checks out. The quoted part doesn't make much sense to me, but that most hackers are progressives (as in "enact progress by change", not the twisted American version) should hardly come as a surprise. The opposite would be a conservative hacker (again, not the US version, but the global definition: "reluctant to change"), which is pretty much an oxymoron. Best would be to eschew political/ideological labels entirely, and just say we hackers are apolitical :)

reply
ToucanLoucan
4 hours ago
[-]
Pro/regressive are terms that are highly contextual. Progress for progress’ sake alone can move anything forward. I would argue the progression of the attention economy has been extremely negative for most of the human race, yet that is “progressing.”
reply
zmgsabst
4 hours ago
[-]
In this instance, it’s just claiming turf for the political movement in the US that has spent the last century:

- inventing scientific racism and (after that was debunked) reinventing other academic pretenses to institutionalize race-based governance and society

- forcibly sterilizing people with mental illnesses until the 1970s, through 2005 via coercion, and until the present via lies, fake studies, and ideological subversion

- being outspokenly antisemitic

Personally, I think it’s a moral failing we allow such vile people to pontificate about virtues without being booed out of the room.

reply
Mashimo
4 hours ago
[-]
The typical CCC/hackerspace circle is kinda progressive/left-leaning, at least in my experience, which I think she (or he?) was implying. Of course not every hacker is :)
reply
metalman
3 hours ago
[-]
"chat~fu"

cachonk!

snap your cuffs, wait fot it eyebrows!

and demonstrate your mastery ,to the muterings of the golly gee's

it will last several more months untill the , GASP!!!, bills ,maintenance costs, regulatory burdens, and various legal issues combine to, pop AI's balloon, where then AI will be left automating all of the tedious, but chair filling, beurocratic/secretarial/appretice positions through out the white collar world. technology is slowly pushing into other sectors, where legacy methods and equipment can now be reduced to a free app on a phone, more to the point, a free, local only app. fact is that we are way over siliconed going forward and that will bite as well, terra bite phones for $100, what then?

reply
ttul
1 hour ago
[-]
This post raises genuine concerns about the integration of large language models into creative and technical work, and the author writes with evident passion about what they perceive as a threat to human autonomy and craft. BUT… the piece suffers from internal contradictions, selective reasoning, and rhetorical moves that undermine its own arguments in ways worth examining carefully.

My opinion: This sort of low-evidence writing is all too common in tech circles. It makes me wish computer science and engineering majors were forced to spend at least one semester doing nothing but the arts.

The most striking inconsistency emerges in how the author frames the people who use LLM tools. Early in the piece, colleagues experimenting with AI coding assistants are described in the language of addiction and pathology: they are “sucked into the belly of the vibecoding grind,” experiencing “existential crisis,” engaged in “harmful coping.” The comparison to watching a friend develop a drinking problem is explicit and damning. This framing treats AI adoption as a personal failure, a weakness of character, a moral lapse. Yet only paragraphs later, the author pivots to acknowledging that people are “forced to use these systems” by bosses, UI patterns, peer pressure, and structural disadvantages in school and work. They even note their own privilege in being able to abstain. These two framings cannot coexist coherently. If using AI tools is coerced by material circumstances and power structures, then the addiction metaphor is not just inapt but cruel — it assigns individual blame for systemic conditions. The author wants to have it both ways: to morally condemn users while also absolving them as victims of circumstance.

This tension extends to the author’s treatment of their own social position. Having acknowledged that abstention from LLMs requires privilege, they nonetheless continue to describe AI adoption as a “brainworm” that has infected even “progressive hacker circles.” The disgust is palpable. But if avoiding these tools is a luxury, then expressing contempt for those who cannot afford that luxury is inconsistent at best and self-congratulatory at worst. The acknowledgment of privilege becomes a ritual disclaimer rather than something that actually modifies the moral judgments being rendered.

The author’s claims about intentionality represent another significant weakness. The assertion that AI systems being resource-intensive “is not a side effect — it’s the point” is presented as revelation, but it functions as an unfalsifiable claim. No evidence is offered that anyone designed these systems to be resource-hungry as a mechanism of control. The technical requirements of training large models, competitive market pressure to scale, and the emergent dynamics of venture capital investment all offer more parsimonious explanations that don’t require attributing coordinated malicious intent. Similarly, the claim that “AI systems exist to reinforce and strengthen existing structures of power and violence” is stated as though it were established fact rather than contested interpretation. This is the central claim of the piece, and yet it receives no argument — it is simply asserted and then built upon, which amounts to begging the question.

The essay also suffers from a pronounced selection bias in its examples. Every person described using AI tools is in crisis, suffering, or compromised. No one uses them mundanely, critically, or with benefit. This creates a distorted picture that serves rhetorical purposes but does not reflect the range of actual use cases. The author’s friends who share their anti-AI sentiment are mentioned approvingly, establishing clear in-group and out-group boundaries. This is identity formation masquerading as analysis — good people resist, compromised people succumb.

There is a false dichotomy running through the piece that deserves attention. The implied choice is between the author’s total abstention, not touching LLMs “with a stick,” and being consumed by the pathological grind described earlier. No middle ground exists in this telling. The possibility of critical, limited, or thoughtful engagement with these tools is never acknowledged as legitimate. You are either pure or contaminated.

Reality doesn’t work this way! It’s not black and white. My take: AI is a transformative technology and the spectrum of uses and misuses of AI is vast and growing.

The philosophical core of their argument also contains an unexamined equivocation. The author invokes the extended cognition thesis — the idea that tools become part of us and shape who we are — to make AI seem uniquely threatening. But this same argument applies to every tool mentioned in the piece: hammers, pens, keyboards, dictionaries. The author describes their own fingers “flying over the keyboard, switching windows, opening notes, looking up words in a dictionary” as part of their extended cognitive process. If consulting a dictionary shapes thought and becomes part of our cognitive process, what exactly distinguishes that from asking a language model to check grammar or suggest a word? The author never establishes what makes AI categorically different from the other tools that have already become part of us. The danger is assumed rather than demonstrated.

There is also a genetic fallacy at work in the argument about power. The author suggests AI is bad partly because of who controls it — surveillance capitalists, fascists, those with enormous physical infrastructure. But this argument conflates the origin and ownership of a technology with its inherent properties. One could make identical arguments about the printing press, the telephone, or the internet itself. The question of whether these tools could be structured differently, owned differently, or used toward different ends is never engaged. Everything becomes evidence of a monolithic system of control.

Finally, there is an unacknowledged irony in the piece’s medium and advice. The author recommends spending less time on social media and reading books instead, while writing a blog post clearly designed for social sharing, complete with the vivid metaphors, escalating moral stakes, and calls to action that characterize viral content. The post exists within and depends upon the very attention economy it criticizes. This is not necessarily hypocrisy — we all must operate within systems we find problematic — but the lack of self-awareness about it is notable given how readily the author judges others for their compromises.

The essay is most compelling when it stays concrete: the phenomenology of writing as discovery, the real pressures workers face, the genuine concerns about who controls these systems and toward what ends. It is weakest when it reaches for grand unified theories of intentional domination, when it mistakes assertion for argument, and when it allows moral contempt to override the structural analysis it claims to offer. The author clearly cares about human flourishing and autonomy, but the piece would be stronger if that care extended more generously to those navigating these technologies without the privilege of refusal.

reply
joestrouth1
12 minutes ago
[-]
Your reading of the addiction angle is much different than mine.

I didn't hear the author criticizing the character of their colleagues. On the contrary, they wrote a whole section on how folks are pressured or forced to use AI tools. That pressure (and fear of being left behind) drives repeated/excessive exposure. That in turn manifests as dependence and progressive atrophy of the skills they once had. Their colleagues seem aware of this as evidenced by "what followed in most of them, almost like a reflex, was a self-justification of why the way they use these tools is fine". When you're dependent on something, you can always find a 'reason'/excuse to use. AA and other programs talk about this at length without morally condemning addicts or assigning individual blame.

> For most of us, self-justification was the maker of excuses; excuses, of course, for drinking, and for all kinds of crazy and damaging conduct. We had made the invention of alibis a fine art. [...] We had to drink because at work we were great successes or dismal failures. We had to drink because our nation had won a war or lost a peace. And so it went, ad infinitum. We thought "conditions" drove us to drink, and when we tried to correct these conditions and found that we couldn't to our entire satisfaction, our drinking went out of hand

Framing something as addictive does not necessarily mean that those suffering from it are failures/weak/immoral but you seem to have projected that onto the author.

Their other analogy ("brainworm") is similar. Something that no-one would willingly sign up for if presented with all the facts up front but that slips in and slowly develops into a serious issue. Faced with mounting evidence of the problem, folks have a strong incentive to downplay the issue because it's cognitively uncomfortable and demands action. That's where the "harmful coping" comes in: minimizing the severity of the problem, avoiding the topic when possible, telling yourself or others stories about how you're in control or things will work out fine, etc.

reply
kleiba
1 hour ago
[-]
Damn, I knew I shouldn't have read on after "falafel sandwich"...
reply
ojr
2 hours ago
[-]
I always thought years of experience in a language was a silly job requirement. LLMs allow me to write Rust code as a total Rust beginner and to create a valuable SaaS, while most experienced Rust developers have never built anything that made $1 outside of their work. I wouldn't call it devaluation; my programming experience definitely helps with debugging. LLMs eliminate boilerplate, not engineering judgement and product decisions.
reply
drudolph914
1 hour ago
[-]
I think when the author says

> “We programmers are currently living through the devaluation of our craft”

my interpretation of what the author means by devaluation is the general trend that we’re seeing in LLMs

The theory I hear from investors is that as LLMs generally improve, there will come a day when an LLM’s default code output, coupled with continued hardware improvements, will be _good enough_ for the majority of companies - even if the code looks like crap and is 100x slower than it needs to be.

This doesn’t mean there won’t be a few companies that still need SWEs to drop down and do engineering, but tbh, the majority of companies today just need a basic web app - and we’ve commoditized web app dev tools to oblivion. I’d even go as far as to argue that what most programmers do today isn’t engineering; it’s gluing together an ecosystem of tooling and/or APIs.

Real engineering seems to happen outside of work on open source projects, at the Mag 7 on specialized teams, or at niche, deeply technical startups.

EDIT: I’m not saying this is good or bad, but I’m just making the observation that there is a trend towards devaluing this work in the economy for the majority of people, and I generally empathize with people who just want stability and to raise a family within reasonable means

reply
mountainriver
2 hours ago
[-]
I really love LLMs for Rust. Before them I was an intermediate Rust dev, and only used it in specific circumstances where the extra coding overhead paid off.

Now I write just about everything in Rust because why not? If I can vibe code Rust about as fast as Python, why would I ever use Python outside of ML?

reply