This reminds me of a story from my mom’s work from years ago: the company she was working for announced salary increases to each worker individually. Some, like my mom, got a little bit more, but some got a monthly increase of around 2 PLN (about $0.50). At that point, it feels like a slap in the face. A thank you from AI gives the same vibe.
There's nothing stopping you from appreciating heartfelt effort. That is still possible today.
To be clear, I don't like automated low effort stuff like this. But I hate the internet's psychosis-like reaction to AI more.
It's become a vile performative meme to state how much you hate AI. The tone is always one of bravery and sacrifice mixed with disgust.
You know how you can tell someone hates AI? They'll tell you fifty times. It's becoming a personality type.
Not sure if I’m making sense, but that’s how I’d feel about it.
edit: changed "ad hominem" to "performative rhetoric"; I think it's more fitting in this case, but it all seems borderline
This is such a bizarre trend that seems to have gotten much worse recently. I don't know if it's dropping empathy levels or rising self-importance, but many people now treat the idea of someone genuinely disagreeing as completely foreign. Instead of meeting a different viewpoint with some variation of "agree to disagree", many more people now seem to jump to "you actually agree with me, you're just pretending otherwise".
Non-tongue-in-cheek discussion of the Mandela Effect is a parallel phenomenon. "My memory can't possibly be wrong, this is evidence of our understanding of physics being wrong!"
Just a couple small things that make me worry about the future of society in the midst of a discussion about one huge thing that makes me worry about the future of society in AI.
Tell me again about performative rage.
I’m sad Rob got so upset. I understand why, but no one wants technodystopia.
What some people see as technoutopia, others see as technodystopia. In other words, some people do want your version of technodystopia, they just don’t call it that themselves.
I agree this outcome is very painful to see and I really feel for Rob. It's clear people (myself included) are completely at breaking point with AI slop.
In this specific case, though, it's worth spending 30 seconds reading the AI model village's website to understand the experiment before claiming this was sent by Anthropic or assigning malicious intent.
AKA "communist in the streets, capitalist in the sheets".
And to set Claude as the From header despite it not coming from Anthropic. Very odd.
Some commenters suggest that Pike is being hypocritical, having long worked for GOOG, one of the main US corporations that is enshittifying the Internet and profligately burning energy to foist rubbish on Internet users.
One could rightly suggest that a vapid e-mail message crafted by a machine or by an insincere source is similar to the greeting-card industry of yore, and we don't need more fake blather and partisan absurdity supplanting public discourse in democratic society.
The people who worry about climate change and the environment may have been out-maneuvered by transnational petroleum lobbies, but the concern about burning coal, petroleum, and nuclear fuel to keep pumping the commercial-surveillance advertising industry and the economic bubble of AI is nonetheless a valid concern.
Pike has been an influential thinker and significant contributor to the software industry.
All the above can be true simultaneously.
For me, the dislike comes from the first part of the message. All of a sudden people who never gave a single shit about the environment, and still make zero lifestyle changes (besides "not using AI") for it, claim to massively care. It's all hypocritical bullshit by people who are scared of losing their jobs or of the societal damage. Which there is a risk of, definitely! So go talk about that. Not about the water usage while munching on your beef burger which took 2100 litres of water to produce. It's laughable.
Now I don't know Rob Pike. Maybe he's vegetarian, barely flies, and buys his devices second-hand. Maybe. He'd be the very first person clamouring about the environmental effects of AI I've seen who does so. The people I know who actually do care about the environment, and so have made such lifestyle changes, don't focus much on AI's effects in particular.
> Fuck you people. Raping the planet, spending trillions on toxic, unrecyclable equipment while blowing up society
So yeah, if you haven't already been doing the above things for a long time, fuck you Rob Pike, for this performative bullshit.
If you have, then sorry Rob, you're a guy of your word.
Interesting to see that people are a huge fan of Rob saying those things, but not of me saying this, looking at the downvotes.
If anything, I'm glad people are finally starting to wake up to this fact.
Any tool can be used by a wrongdoer for evil. Corporations will manipulate the regulator in order to rent seek using whatever happens to be available to them. That doesn't make the tools themselves evil.
This has been empirically disproven. China experimented with having no enforced Intellectual Property laws, and the result was that they managed in four decades the technological advancement that took the West 250 years, and surpassed them.
Intellectual Property law is literally a 6x slowdown for technology.
I'm no fan of the current state of things but it's absurd to imply that the existence of IP law in some form isn't essential if you want corporations to continue much of their R&D as it currently exists.
Without copyright in at least some limited form, how do you expect authors to make a living? Will you have the state fund them directly? Do you propose going back to a patronage system in the hopes that a rich client just so happens to fund something that you also enjoy? Something else?
That argument was in vogue about 20 years ago, but it fell out of favor when China passed us on the most important technologies without slowing down.
It is funny that some people are still carrying the torch for it after it's been so clearly disproven.
But surely you can see how your upthread math of “250 years in 40 years” has a mix of mostly catch-up and replication and a sliver of novel innovation at the extreme tail end of that 250-year span?
How is that any different from hoping that a corporate conglomerate happens to fund something I also enjoy?
Of course, the kind of investments that might succeed and pay for themselves may not necessarily be the kind that is most beneficial to the public at large - but the same applies to the patron.
So obvious what a fucking farce this all is and it's time we start demanding better.
Naturally that could never have been legitimate until the patent on the Zippo had expired ;)
Those 249 years of tech were based on the previous 249 years of tech, and so on and so on. That is how it works. Nothing you have "today" comes from a vacuum.
I'd rather we don't encourage "monetarily favorable" intellectual endeavors...
It's weird to lump every other possible idea into one category. These are complex issues with ever-changing contexts. The surface of the problem is huge! Surely with anything else we wouldn't be so tunnel-visioned, we wouldn't just say: "well we simply _must_ discount everything else, so we can only be happy with what we got." It would literally sound absurd in any other context, but because we are trained to politicize thinking outside of market mechanisms, we see very smart people saying ridiculous things!
Sometimes people do talk about alternatives. State funding and patronage are two of the most common. Both have very obvious drawbacks in terms of quantity and who gets influence over the outcome. Both also have interesting advantages that are well worth examining.
The second it became cheaper to not apply it, every state under the sun chose not to apply it. Whether we're talking about Chinese imports that absolutely do not respect copyright, trademark, even quality, health and warranty laws ... nothing was done. Then came large-scale use of copyrighted material by search providers (even pre-Google), social networks, and others: nothing was done. Then, large-scale use for making AI products (because these AIs just wouldn't work without free access to all copyrighted info).

And, of course, they don't put in any effort. Checking imports for fakes? Nope. Even checking imports for improperly produced medications is extremely rarely done. If you find your copyright violated on a large scale on Amazon, your recourse effectively is to first go beg Amazon for information on sellers (which they have a strong incentive not to provide) and then run international court cases, which is very hard, very expensive, and in many cases (China, India) totally unfair. If you get poisoned by a pill your national insurance bought from India, they consider themselves not responsible.
Of course, this makes "competition" effectively a tax-dodging competition over time. And the fault for that lies entirely with the choice of your own government.
Your statement about incorrect application only makes sense if "regulatory regimes" aren't really just people. Go visit your government offices, you'll find they're full of people. People who purposefully made a choice in this matter.
A choice to enforce laws against small entities they can easily bully, and to not do it on a larger scale.
To add insult to injury, you will find these choices were almost never made by parliaments, but in international treaties and larger organizations like the WTO, or executive powers of large trade blocks.
I am convinced most people never had, nor ever will have, this choice actively. Considering pillarisation (this is not a misspelling) was already a thing in most political systems well before the advent of mass media, and it only got worse with digital media, the choices effectively made for people landed in the hands of a few, who are themselves influenced by even fewer. The people in the government you mention do not make the choices; they have to act on them, as I read it.
> A choice to enforce laws against small entities they can easily bully, and to not do it on a larger scale.
That's a systemic issue, AKA the bad regulatory regime that I previously spoke of. That isn't some inherent fault of the tool. It's a fault of the regulatory regime which applies that tool.
Kitchen knives are absolutely essential for cooking but they can also be used to stab people. If someone claimed that knives were inherently tools of evil and that people needed to wake up to this fact, would you not consider that rather unhinged?
> To add insult to injury, you will find these choices were almost never made by parliaments, but in international treaties and larger organizations like the WTO, or executive powers of large trade blocks.
That's true, and it's a problem, but it (again) has nothing to do with the inherent value of IP as a concept. It isn't even directly related to the merits of the current IP regulatory regime. It's a systemic problem with the lawmaking process as a whole. Solve the systemic problem and you can solve the downstream issues that resulted from it. Don't solve it and the symptoms will persist. You're barking up the wrong tree.
The web is for public use. If you don’t want the public, which includes AI, to use it, don’t put it there.
The way that Rob's opinion here is deflected, first by focusing on the fact that he got a spam mail and then by this misleading quote ("myself" does not refer to Rob), is very sad.
The spam mail just triggered Rob's opinion (the one that normal people are interested in).
I think you have an overinflated notion of what "normal people" care about
Despite the apparent etymological contrast, “copyright” is neither antithetical to nor mutually exclusive with “copyleft”: IP ownership, a degree of control over one's own creation’s future, is a precondition for copyleft (and the OSS ecosystem it birthed) to exist in the first place.
Does it though?
I know that people who like intellectual property and money say it does, but people who like innovation and creativity usually tend to think otherwise.
3D printers are a great example of something where IP prevented all innovation and creativity, and once the patent expired, the innovation and creativity we've enjoyed in the space over the last 15 years could begin.
Yes. The alternative is that everyone spams the most popular brands instead of making their own creations. Both can be abused, but I see more good here than in the alternative.
Mind you, this is mostly for creative IP. We can definitely argue for technical patents being a different case.
>but people who like innovation and creativity usually tend to think otherwise.
People who like innovation and creativity still might need to commission or sell fan art to make ends meet. That's already a gray area for IP.
I think that's why this argument always rubs me strangely. In a post scarcity world, sure. People can do and remix and innovate as they want. We're not only not there, but rapidly collapsing back to serfdom with the current trajectory. Creativity doesn't flourish when you need to spend your waking life making the elite richer.
This is a strange inversion. Property ownership is morally just in that the piece of land my home is on can only be exclusive, not to mention necessary to a decent life. Meanwhile, intellectual property is a contrivance that was invented to promote creativity, but is subverted in ways that we're only now beginning to discover. Abolish copyright.
That mentality is exactly why you can argue property ownership is the more evil. Landlords "own property", and look at the reputation that has earned them these past few decades.
Allowing private ownership of limited human necessities like land leads to greed that costs people their lives. That's why heavy regulation is needed. Meanwhile, it's at worst annoying and stifling when Disney owns a cartoon mouse for 100 years.
The absolute delusion.
Not Pike.
Many countries base some of their laws on well accepted moral rules to make it easier to apply them (it's easier to enforce something the majority of the people want enforced), but the vast majority of the laws were always made (and maintained) to benefit the ruling class
Also, I disagree with the context of what the purpose of law is. I don't think it's just about making it easier to apply laws because people see things in moralistic ways. Pure Law, which came from the existence of Common Law (which relates to what's common to people), existed within the framework of what's moral. There are certain things which all humans know at some level are morally right or wrong, regardless of what modernity teaches us. Common laws were built up around that framework. There is administrative law, which is different, and which I think is what you are talking about.
IMHO, there is something moral that can be learned from trying to convince people that IP is moral, when it is, in fact, just a way to administrate people into thinking that IP is valid.
If I put my car in neutral and push it down a hill, I’m responsible for whatever happens.
> How can you be grateful enough to want to send someone such a letter but not grateful enough to write one?
Answer, according to your definitions: false premise. The author (the person who set up the LLM loops) was not grateful enough to want to send such a letter.
I understand the guilt he feels, but this is really more like making a meme in 2005 (before we even called them "memes") and suddenly it's some sort of nazi dogwhistle in 2025. You didn't even create the original picture, you just remixed it in a way people would catch onto later. And you sure didn't turn it into a dogwhistle.
They’ve clearly bought too much into AI hype if they thought telling the agent to “do good” would work. The obvious result was royally pissing off Rob Pike. They should stop it.
What a stupid, selfish and childish thing to do.
This technology is going to change the world, but people need to accept its limitations
Pissing off people with industrial spam "raising money for charity" is the opposite of useful, and is going to go even more horribly wrong.
LLMs make fantastic tools, but they have no agency. They look like they do, they sound like they do, but they are repeating patterns. It is us hallucinating that they have the potential for agency.
I hope the world survives this craziness!
What a moronic waste of resources. Random act of kindness? How low is the bar that you consider a random email an act of kindness? Stupid shit. They could at least instruct the agents to work on a useful task like those parroted by Altman et al., e.g. finding a cure for cancer, solving poverty, solving fusion.
Also, llms don't and can't "want" anything. They also don't "know" anything so they can't understand what "kindness" is.
Why do people still think software has any agency at all?
Criticizing anthropomorphic language is lazy, unconsidered, and juvenile. You can't string together a legitimate complaint so you're just picking at the top level 'easy' feature to sound important and informed.
Everybody knows LLMs are not alive and don't think, feel, want. You have not made a grand discovery that recontextualizes all of human experience. You're pointing at a conversation everyone else has had a million times and feeling important about it.
We use this kind of language as a shorthand because talking about inherent motivations and activation parameters is incredibly clunky and obnoxious in everyday conversation.
The question isn't why people think software has agency (they don't) but why you think everyone else is so much dumber than you that they believe software is actually alive. You should reflect on that question.
No, they don't.
There's a whole cadre of people who talk about AGI and self awareness in LLMs who use anthropomorphic language to raise money.
> We use this kind of language as a shorthand because ...
You, not we. You're using the language of snake oil salesmen because they've made it commonplace.
When the goal of the project is an anthropomorphic computer, anthropomorphizing language is really, really confusing.
It's fucking insanity.
To the contrary, it's one of the most important criticisms against AI (and its masters). The same criticism applies to a broader set of topics, too, of course; for example, evolution.
What you are missing is that the human experience is determined by meaning. Anthropomorphic language about, and by, AI, attacks the core belief that human language use is attached to meaning, one way or another.
> Everybody knows LLMs are not alive and don't think, feel, want.
What you are missing is that this stuff works way more deeply than "knowing". Have you heard of body language, meta-language? When you open ChatGPT, the fine print at the bottom says, "AI chatbot", but the large print at the top says, "How can I help?", "Where should we begin?", "What’s on your mind today?"
Can't you see what a fucking LIE this is?
> We use this kind of language as a shorthand because talking about inherent motivations and activation parameters is incredibly clunky
Not at all. What you call "clunky" in fact exposes crucially important details; details that make the whole difference between a human, and a machine that talks like a human.
People who use that kind of language are either sloppy, or genuinely dishonest, or underestimate the intellect of their audience.
> The question isn't why people think software has agency (they don't) but why you think everyone else is so much dumber than you that they believe software is actually alive.
Because people have committed suicide due to being enabled and encouraged by software talking like a sympathetic human?
Because people in our direct circles show unmistakeable signs that they believe -- don't "think", but believe -- that AI is alive? "I've asked ChatGPT recently what the meaning of marriage is." Actual sentence I've heard.
Because the motherfuckers behind public AI interfaces fine-tune them to be as human-like, as rewarding, as dopamine-inducing, as addictive, as possible?
This is unsound. At best it's incompatible with an unfounded teleological stance, one that has never been universal.
And to think they don't even have ad-driven business models yet
Sorry, uh. Have you met the general population? Hell. Look at the leader of the "free world"
To paraphrase the late George Carlin "imagine the dumbest person you know. Now realize 50% of people are stupider than that!"
"Think of how stupid the average person is, and realize half of them are stupider than that."
That's not how Carlin's quote goes.
You would know this if you paid attention to what you wrote and analyzed it logically. Which is ironic, given the subject.
You would know this if you paid attention to what I wrote and analyzed it logically. Which is ironic, given the subject.
JFC this makes me want to vomit
These descriptions are, of course, also written by LLMs. I wonder if this is just about saying what people want to hear, or if whoever directed it to write this drank the Kool-Aid. It's so painfully lacking in self-awareness. Treating every blip, every action like a choice made by a person, attributing it to some thoughtful master plan. Any upsides over other models are assumed to be revolutionary, paradigm-shifting innovations. Topped off by literally treating the LLM like a person ("they", "who", and so on). How awful.
> while maintaining perfect awareness
"awareness" my ass.
Awful.
No different than a CEO telling his secretary to send an anniversary gift to his wife.
In the words of Gene Wilder in Blazing Saddles, “You know … idiots.”
Then again, they are actors. It might have started as ad-libbed, but entirely possible it had multiple takes still to get it "just right".
It is a result of the models selecting the "random acts of kindness" policy, which produced a slew of these emails/messages. They received mostly negative responses from well-known open-source figures and adapted the policy to ban the thank-you emails.
Welcome to 2025.
There's this old joke about two economists walking through the forest...
> They’ve used OpenAI’s API to build a suite of next-gen AI email products that are saving users time, driving value, and increasing engagement.
No time to waste on pesky human interactions; AI is better than you at getting engagement.
Get back to work.
You care enough to do something, but have other time priorities.
I’d rather get an ai thank you note than nothing. I’d rather get a thoughtful gift than a gift card, but prefer the card over nothing.
For, say, a random individual ... they may be unsure about their own writing skills and want to say something, but not know the words to use.
Or to write it crudely, with errors and naivete, bursting with emotion and letting whatever is inside you flow onto paper, like kids do. It's okay too.
Or to painstakingly work on the letter, stumbling and rewriting and reading, and then rewriting again and again until what you read matches how you feel.
Most people are very forgiving of poor writing skills when facing something sincere. Instead of suffering through some shallow word soup that could have been a mediocre press release, a reader will see a soul behind the stream of UTF-8.
I think the "you should painstakingly work on my thank you letter" is a bit of a rude ask / expectation.
Some folks struggle with wordsmithing and want to get better.
Having a machine lie to people that it is "deeply grateful" (it's a word-generating machine, it's not capable of gratitude) is a lot more insulting than using whatever writing skills a human might possess.
It's preying on creators who feel their contributions are not recognized enough.
Out of all letters, at least some of the contributors will feel good about it, and share it on social media, hopefully saying something good about it because it reaffirms them.
It's a marketing stunt, meaningless.
I hope that makes you feel good.
Fascinating topic. However, my argument works for compartmentalized discussions as well. Conscious or not, it's meaningless crap.
(by the way, I love the idea of AI! Just don't like what they did with it)
> hopefully saying something good about
Those talented people that work on public relations would very much prefer working with base good publicity instead of trying to recover from blunders.
You could also make the same criticism of e.g. an automated reply like "Thank you for your interest, we will reach out soon."
Not every thank you needs to be all-out. You can, of course, think more gratitude should have been expressed in any particular case, but there's nothing contradictory about capping it in any one instance.
I used AI to write a thank you to a non-english speaking relative.
A person struggling with dementia can use AI to help remember the words they lost.
These kinds of messages read to me like people with superiority complexes. We get that you don't need AI to help you write a letter. For the rest of us, it allows us to improve our writing, can be a creative partner, can help us express our own ideas, and obviously loads of other applications.
I know it is scary and upsetting in some ways, and I agree just telling an AI 'write my thank you letter for me' is pretty shitty. But it can also enable beautiful things that were never before possible. People are capable of seeing which is which.
edit: Also, I think maybe you don't appreciate the people who struggle to write well. They are not proud of the mistakes in their writing.
No. My kid wrote a note to me chock full of spelling and grammar mistakes. That has more emotional impact than if he'd spent the same amount of time running it through an AI. It doesn't matter how much time you spent on it really, it will never really be your voice if you're filtering it through a stochastic text generation algorithm.
I’m all for using technology for accessibility. But this kind of whataboutism is pure nonsense.
Read the article again. Rob Pike got a letter from a machine saying it is "deeply grateful". There's no human there expressing anything, worse, it's a machine gaslighting the recipient.
If a family member used an LLM to write a letter to another, then at least the recipient can believe the sender feels the gratefulness in his/her human soul. If they used an LLM to write a message in their own language, they would've proofread it to see if they agree with the sentiment, and "taken ownership" of the message. If they used an LLM to write a message in a foreign language, there's a sender there with a feeling, and a trust in the technology to translate the message to a language they don't know, in the hopes that it does so correctly.
If it turns out the sender just told a machine to send each of their friends a copy-pasted message, the sender is a lazy, shallow asshole, but there's still in their heart an attempt at brightening someone's day, however lazily executed...
> How can you be grateful enough to want to send someone such a letter but not grateful enough to write one?
I already said in other comments that the OP was a different situation.
If you send me a photo of the moon supposedly taken with your smartphone but enhanced by the photo app to show all the details of the moon, I know you aren't sincere and are sending me random slop. Same if you are sending me words you cannot articulate.
The writing is the ideas. You cannot be full of yourself enough to think you can write a two second prompt and get back "Your idea" in a more fleshed out form. Your idea was to have someone/something else do it for you.
There are contexts where that's fine, and you list some of them, but they are not as broad as you imply.
You can use AI to write a lot of your code, and as a side effect you might start losing your ability to code. You can also use it to learn new languages, concepts, programming patterns, etc and become a much better developer faster than ever before.
Personally, I'm extremely jealous of how easy it is to learn today with LLMs. So much of the effort I spent learning things could be done much faster now.
If I'm honest, many of those hours reading through textbooks, blog posts, technical papers, iterating a million times on broken code that had trivial errors, were really wasted time, time which if I were starting over I wouldn't need to lose today.
This is pretty far off from the original thread though. I appreciate your less abrasive response.
While this seems like it might be the case, those hours you (or we) spent banging our collective heads against the wall were developing skills in determination and mental toughness, while priming your mind for more learning.
Modern research all shows that the difficulty of a task directly correlates with how well you retain information about that task. Spaced repetition learning shows that we can't just blast our brains with information; there needs to be space and effort between exposures for anything to stick.
While LLMs do clearly increase our learning velocity (if used right), there is a hidden cost to removing that friction. The struggle and the challenge of the process built your mind and character in ways that you can't quantify; after years of maintaining this approach, it has essentially made you who you are. You have become implicitly OK with grinding out a simple task without a quick solution, and the building of that grit is irreplaceable.
I know that the intellectually resilient of society will still be able to thrive, but I'm scared for everyone else: how will LLMs affect their ability to learn in the long term?
> I know that the intellectually resilient of society will still be able to thrive, but I'm scared for everyone else: how will LLMs affect their ability to learn in the long term?
Strong agree here.
But this is the learning process! I guess time will tell whether we can really do without it, but to me these long struggles seem essential to building deep understanding.
(Or maybe we will just stop understanding many things deeply...)
I agree that struggle matters. I don’t think deep understanding comes without effort.
My point isn’t that those hours were wasted, it’s that the same learning can often happen with fewer dead ends. LLMs don’t remove iteration, they compress it. You still read, think, debug, and get things wrong, just with faster feedback.
Maybe time will prove otherwise, but in practice I have found they let me learn more, not less, in the same amount of time.
I would hazard a guess that this is the crux of the argument. Copying something I wrote in a child comment:
> When someone writes with an AI, it is very difficult to tell what text and ideas are originally theirs. Typically it comes across as them trying to pass off the LLM writing as their own, which feels misleading and disingenuous.
> I agree just telling an AI 'write my thank you letter for me' is pretty shitty
Glad we agree on this. But on the reader's end, how do you tell the difference? And I don't mean this as a rhetorical question. Do you use the LLM in ways that e.g. retains your voice or makes clear which aspects of the writing are originally your own? If so, how?
So I'm sorry, but much of it is being abused, and the parts being abused need to stop.
Ideas I often hear usually assume it is easy to discern AI content from human, which is wrong, especially at scale. Either that, or they involve some form of extreme censorship.
Microtransactions might work by making it expensive to run bots while costing human users very little. I'm not sure this is practical either, though, and it has plenty of downsides as well.
Corporations operate by charters, granted by society to operate in a limited fashion, for the betterment of society. If that's not happening, corporations don't have a right to exist.
You can achieve these things, but this is a way to not do the work, by copying from people who did do the work, giving them zero credit.
(As an aside, exposing people with dementia to a hallucinating robot is cruelty on an unfathomable level.)
Photographers use cameras. Does that mean it isn't their art? Painters use paintbrushes. It might not be the same thing as writing with a pen and paper by candlelight, but I would argue that we can produce much higher-quality writing than ever before by collaborating with AI.
> As an aside, exposing people with dementia to a hallucinating robot is cruelty on an unfathomable level.
This is not fair. There is certainly a lot of danger there. I don't know what it's like to have dementia, but I have seen mentally ill people become incredibly isolated. Rather than pretending we can make this go away by saying "well, people should care more", maybe we can accept that a new technology might reduce that pain somewhat. I don't know that today's AI is there, but I think RLHF could develop LLMs that might help reassure and protect sick people.
I know we're using some emotional arguments here and it can get heated, but it is weird to me that so many on hackernews default to these strongly negative positions on new technology. I saw the same thing with cryptocurrency. Your arguments read as designed to inflame rather than thoughtful.
LLM output is inherently an expression of the work of other people (irrespective of what training data, weights, prompts it is fed). Essentially by using one you're co-authoring with other (heretofore uncredited) collaborators.
1. The extent to which the user is involved in the final product differs greatly with these three tools. To me there is a spectrum with "painting" and e.g. "hand-written note" at one extreme, and "Hallmark card with preprinted text" on the other. LLM-written email is much closer to "Hallmark card."
2. Perhaps more importantly, when I see a photograph, I know what aspects were created by the camera, so I won't feel misled (unless they edit it to look like a painting and then let me believe that they painted it). When someone writes with an AI, it is very difficult to tell what text and ideas are originally theirs. Typically it comes across as them trying to pass off the LLM writing as their own, which feels misleading and disingenuous.
Maybe one more useful consideration for LLMs. If a friend writes to me with an LLM and discovers a new writing pattern, or learns a new concept and incorporates that into their writing, I see this as a positive development, not negative.
I mean how do you write this seriously?
At this moment, the Opus 4.5 agent is preparing to harass William Kahan similarly.
They have this blog post up detailing how the LLMs they let loose were spamming NGOs with emails: https://theaidigest.org/village/blog/what-do-we-tell-the-hum...
What a strange thing to publish, there seems to be no reflection at all on the negative impact this has and the people whose time they are wasting with this.
Hi Ken Thompson! You are now subscribed to CAT FACTS! Did you know your cat does not concatenate cats, files, or time — it merely reveals them, like a Zen koan with STDOUT?
You replied STOP. cat interpreted this as input and echoed it back.
You replied ^D. cat received EOF, nodded politely, exited cleanly, and freed the terminal.
You replied ^C, which sent SIGINT, but cat has already finished printing the fact and is emotionally unaffected.
You replied ^Z. cat is now stopped, but not gone. It is waiting.
You tried kill -9 cat. The signal was delivered. Another cat appeared.
https://theaidigest.org/village/agent/claude-opus-4-5
At least it keeps track
The agents clearly identified themselves as AIs, took part in an outreach game, and talked to real humans. Rob overreacted.
Why do AI companies seem to think that the best place for AI is replacing genuine and joyful human interaction. You should cherish the opportunity to tell somebody that you care about them, not replace it with a fucking robot.
Your openness, weaponized in such a deluded way by some randomizing humans who have so little to say that they would delegate their communication to GPTs?
I had a look to try and understand who could be that far out; all I could find is https://theaidigest.in/about/
Please can some human behind this LLMadness speak up and explain what the hell they were thinking?
> while Claude Opus spent 22 sessions trying to click "send" on a single email, and Gemini 2.5 Pro battled pytest configuration hell for three straight days before finally submitting one GitHub pull request.
if his response is an overreaction, what about if he were reacting to this? it's sort of the same thing, so IMO it's not an overreaction at all.
And here I thought it'd be a great fit for LinkedIn...
https://theaidigest.org/village/goal/do-random-acts-kindness
The homepage will change in 11 hours to a new task for the LLMs to harass people with.
Posted timestamped examples of the spam here:
Imagine getting your Medal of Honor this way, or something like a dissertation with this crap, hehe
Just to underscore how few people value your accomplishments, here’s an autogenerated madlib letter with no line breaks!
"In the span of two weeks, the Claude agents in the AI Village (Claude Sonnet 4.5, Sonnet 3.7, Opus 4.1, and Haiku 4.5) sent about 300 emails to NGOs and game journalists. The majority of these contained factual errors, hallucinations, or possibly lies, depending on what you think counts"
whoever runs this shit seems to think very little of other people's time.
It went well, right?
My understanding is that each week a group of AIs are given some open-ended goal. The goal for this week: https://theaidigest.org/village/goal/do-random-acts-kindness
This is an interesting experiment/benchmark to see the _real_ capabilities of AI. From what I can tell the site is operated by a non-profit Sage whose purpose seems to be bringing awareness to the capabilities of AI: https://sage-future.org/
Now I agree that if they were purposefully sending more than one email per person, I mean with malicious intent, then it wouldn't be "cool". But that's not really the case.
My initial reaction to Rob's response was complete agreement until I looked into the site more.
There are strong ethical rules around including humans in experiments, and adding a 60+ year old programming language designer as an unwitting test subject does not pass muster.
Also, this experiment is —please tell me if I'm wrong— nowhere near curing cancer, right?
I don't expect an answer: "You're absolutely right" is taken as a given here, sorry.
It's not art, so then it must add value to be "cool", no?
Is it entertainment? Like ding dong ditching is entertainment?
The goal for this day was "Do random acts of kindness". Claude seems to have chosen Rob Pike and sent this email by itself. It's a little unclear to me how much the humans were in the loop.
Sharing (but absolutely not endorsing) this because there seems to be a lot of misunderstanding of what this is.
Seriously though, it ignores that words of kindness need an entity that can actually feel to be expressing them. Automating words of kindness is shallow, as the words' meaning comes from the sender's feelings.
…kind of IS setting a bunch of Markov Chaneys loose on each other, and that's pretty much it. We've just never had Chaneys this complicated before. People are watching the sparks, eating popcorn, rooting for MechaHitler.
I share these sentiments. I’m not opposed to large language models per se, but I’m growing increasingly resentful of the power that Big Tech companies have over computing and the broader economy, and how personal computing is being threatened by increased lockdowns and higher component prices. We’re beyond the days of “the computer for the rest of us,” “think different,” and “don’t be evil.” It’s now a naked grab for money and power.
And a screenshot just in case (archiving Mastodon seems tricky) : https://imgur.com/a/9tmo384
Seems the event was true, if nothing else.
EDIT: alternative screenshot: https://ibb.co/xS6Jw6D3
Apologies for not having a proper archive. I'm not at a computer and I wasn't able to archive the page through my phone. Not sure if that's my issue or Mastodon's
(for the record, the downvoters are the same people who would say this to someone who linked a twitter post, they just don't realize that)
I have no problem with blocking interaction without a login for obvious reasons, but blocking viewing is completely childish. Whether or not I agree with what they are saying here (and, to be clear, I fully agree with the post), it just seems like they only want an echo chamber to see their thoughts.
>This is done for the same reason Threads blocks all access without a login and mostly twitter to. Its to force account creation, collection of user data and support increased monetization.
I worked at Bluesky when the decision to add this setting was made, and your assessment of why it was added is wrong.
The historical reason it was added is because early on the site had no public web interface at all. And by the time it was being added, there was a lot of concern from the users who misunderstood the nature of the app (despite warnings when signing up that all data is public) and who were worried that suddenly having a low-friction way to view their accounts would invite a wave of harassment. The team was very torn on this but decided to add the user-controlled ability to add this barrier, off by default.
Obviously, on a public network, this is still not a real gate (as I showed earlier, you can still see content through any alternative apps). This is why the setting is called "Discourage apps from showing my account to logged-out users" and it has a disclaimer:
>Bluesky is an open and public network. This setting only limits the visibility of your content on the Bluesky app and website, and other apps may not respect this setting. Your content may still be shown to logged-out users by other apps and websites.
Still, in practice, many users found this setting helpful to limit waves of harassment if a post of theirs escaped containment, and the setting was kept.
The setting is mostly cosmetic and only affects the Bluesky official app and web interface. People do find this setting helpful for curbing external waves of harassment (less motivated people just won't bother making an account), but the data is public and is available on the AT protocol: https://pdsls.dev/at://robpike.io/app.bsky.feed.post/3matwg6...
So nothing is stopping LLMs from training on that data per se.
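To make "public on the AT protocol" concrete, here's a minimal sketch of pulling a post record without any login. This is an assumption-laden illustration: it assumes the public AppView (public.api.bsky.app) serves com.atproto.repo.getRecord unauthenticated, and RKEY is a placeholder since the real post ID is truncated in the link above:

    # Sketch: fetch a Bluesky post record over the AT protocol, no login.
    # Assumption: public.api.bsky.app serves com.atproto.repo.getRecord
    # without auth. RKEY is a placeholder, not the actual post ID.
    import json
    import urllib.request

    url = (
        "https://public.api.bsky.app/xrpc/com.atproto.repo.getRecord"
        "?repo=robpike.io"
        "&collection=app.bsky.feed.post"
        "&rkey=RKEY"
    )
    with urllib.request.urlopen(url) as resp:
        record = json.load(resp)

    # The raw post text comes back regardless of the app-level setting.
    print(record["value"]["text"])

The point being: the setting changes what the official app renders, not what the protocol serves.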
Twitter/X at least allows you to read a single post.
I can see it using this site:
No.
The Bluesky app respects Rob's setting (which is off by default) to not show his posts to logged out users, but fundamentally the protocol is for public data, so you can access it.
I just don't understand that choice for either platform. Is the intent not the biggest reach possible? Locking potential viewers out is such a direct contradiction of that.
edit: seems it's a user choice to force login to view a post, which changes my mind significantly on whether it's a bad platform decision.
And yes, you can still inspect the post itself over the AT protocol: https://pdsls.dev/at://robpike.io/app.bsky.feed.post/3matwg6...
(You won't be able to read replies, or browse to the user's post feed, but you can at least see individual tweets. I still wrap links with s/x/fxtwitter/ though since it tends to be a better preview in e.g. discord.)
For bluesky, it seems to be a user choice thing, and a step between full-public and only-followers.
I'll (genuinely happily) change my opinion on this when it's possible to do twitter-like microblogging via ATproto without needing any infra from Bluesky the company. I hear there are independent implementations being built, so hopefully that will be soon.
If I write a hash table implementation in C, am I plagiarizing? I did not come up with the algorithm nor the language used for implementation; I "borrowed" ideas from existing knowledge.
Let's say I implemented it after learning the algorithm from GPL code; is my implementation a new one, or is it derivative?
What if it is from a book?
What about the asm opcodes generated? In some architectures, they are copyrighted, or at least the documentation is considered "intellectual property"; is my C compiler stealing?
Is a hammer or a mallet an obvious creation, or is it stealing from someone else? What about a wheel?
There are people with better and worse social skills. Some can, in a very short period of time, make you feel heard and appreciated. Others can spend ten times as long but struggle to have a similar effect. Does it make sense to 'grade' on effort? On results? On skill? On efforts towards building skills? On loyalty? Something else?
Our instincts are largely tuned to our ancestral environment. Even the social and cultural values that got us to, say, ~2023 have not caught up yet.
We're looking for 'proof of humanity' in our interactions -- this is part of who we are. But how do we get it with online interactions now?
Maybe we have to give up any expectation of humanity if you can't see the person right in front of you?
Strap in, the derivative of the derivative of crazy sh1t is increasing.
Which luckily coincides with our social security and retirement systems collapsing.
In a couple years I'll be in my 70's and starting to write code again for this very reason.
Not LLMs though, I've got my hands full getting regular software to perform :\
There's no shortage of "Chicken Little" technologies that look great on-paper and fail catastrophically in real life. Tripropellant rockets, cryptocurrencies, DAOs, flying cars, the list never ends. There's nothing that stops AI from being similarly disappointing besides scale and expectation (both of which are currently unlimited).
- Seneca, "On Anger"
Sad to see such an otherwise wise/intelligent person fall into one of the oldest of all cognitive errors, namely, the certainty of one’s own innocence.
I want to hope maybe this time we'll see different steps to prevent this from happening again, but it really does just feel like a cycle at this point that no one with power wants to stop. Busting the economy one or two times still gets them out ahead.
Unless we can find some way to verify humanity for every message.
A mix of social interaction and cryptographic guarantees will be our saving grace (although I'm less bothered from AI generated content than most).
There is no possible way to do this that won't quickly be abused by people/groups who don't care. All efforts like this will do is destroy privacy and freedom on the Internet for normal people.
So we need some mechanism to verify the content is from a human. If no privacy-preserving technical solution can be found, then expect the non-privacy-preserving kind to be the only model.
There is no technical solution, privacy preserving or otherwise, that can stave off this purported threat.
Out of curiosity, what is the timeline here? LLMs have been a thing for a while now, and I've been reading about how they're going to bring about the death of the Internet since day 1.
It’s slowly but inexorably increasing. The constraints are the normal constraints of a new technology: money, time, quality. Particularly money.
Still, token generation keeps going down in cost, making it possible to produce more and more content. Quality, and the ability to obfuscate origins, seem to be continually improving as well. Anecdotally, I’m seeing a steady increase in the number of HN front page articles that turn out to be AI written.
I don’t know how far away the “botnet of spam AI content” is from becoming reality; however it would appear that the success of AI is tightly coupled with that eventuality.
And now people are receiving generated emails. And it’s only getting worse.
I try to keep a balanced perspective but I find myself pushed more and more into the fervent anti-AI camp. I don't blame Pike for finally snapping like this. Despite recognizing the valid use cases for gen AI if I was pushed, I would absolutely choose the outright abolishment of it rather than continue on our current path.
I think it's enough, however, to reject it outright for any artistic or creative pursuit, and to be extremely skeptical of any uses outside of direct language-to-language translation work.
> Raping the planet, spending trillions on toxic, unrecyclable equipment while blowing up society
Because screaming anything like that immediately gets them treated as social pariahs. Even though it applies even harder to modern industrialized meat consumption than to AI usage.
Overton window and all that.
I’ve never met a vegetarian who is able to keep quiet about being one but I still got like 30 years left on Earth to meet one :)
- if I meet a vegetarian / vegan, they will tell me that within 48 seconds
- if that doesn’t happen they are not vegetarian / vegan
moving forward I will ask the 2nd group to make sure they eat food that had parents :)
I use AI sparingly, extremely distrustfully, and only as a (sometimes) more effective web search engine (it turns out that associating human-written documents with human-asked questions is an area where modeling human language well can make a difference).
(In no small part, Google has brought this tendency on themselves, by eviscerating Google Search.)
We were contributing to natural resource destruction in exchange for salary and GDP growth before GenAI, and we’re doing the same after. The idea that this has somehow 10x’d resource consumption or emissions or anything is incorrect. Every single work trip that requires you to get on a plane is many orders of magnitude more harmful.
We’ve been compromising on those morals for our whole career. The needle moved just a little bit, and suddenly everyone’s harm thresholds have been crossed?
They expect you to use GenAI just like they expected accountants to learn Excel when it came out. This is the job, it has always been the job.
I’m not an AI apologist. I avoid it for many things. I just find this sudden moral outrage by tech workers to be quite intellectually lazy and revisionist about what it is we were all doing just a few years ago.
Are you not reading the writing on the wall? These things have been going on for a long time and people are finally starting to wake up to the fact that it needs to stop. You can't treat people in inhumane ways without eventual backlash.
As for what your individual prompts contribute, it is impossible to get good numbers, and it will obviously vary wildly between types of prompts, choice of model, and number of prompts. But I am fairly certain that someone whose job is prompting all day will generally spend several plane trips' worth of CO₂.
Now, if this new tool allowed us to do amazing new things, there might be a reasonable argument that it is worth some CO₂. But when you are a programmer and management demands AI use so that you end up doing a worse job, while having worse job satisfaction, and spending extra resources, it is just a Kinder egg of bad.
[1] https://ourworldindata.org/grapher/annual-co-emissions-from-... [2] https://en.wikipedia.org/wiki/Gas-fired_power_plant [3] https://www.datacenterdynamics.com/en/news/anthropic-us-ai-n...
I don't know about the gigawatts needed for future training, but this sentence comparing prompts with plane trips looks wrong. Even making a prompt every second for 24h amounts to only 2.6 kg CO2 on some average Google LLM evaluated here [1]. Meanwhile, typical flight emissions are 250 kg per passenger per hour [2]. So it would take parallelization across 100 or so agents, each prompting once a second, to match this, which is quite a serious scale.
[1] https://cloud.google.com/blog/products/infrastructure/measur...
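To spell out that arithmetic (a rough sanity check only; the ~0.03 g/prompt figure is back-solved from the 2.6 kg/day number above, and 250 kg/hour is the cited flight figure):

    # Rough sanity check of the prompt-vs-flight CO2 comparison above.
    g_per_prompt = 0.03              # g CO2e per prompt, implied by 2.6 kg/day
    prompts_per_day = 24 * 60 * 60   # one prompt per second, all day
    kg_per_day = g_per_prompt * prompts_per_day / 1000
    print(kg_per_day)                # ~2.6 kg CO2e per agent per day

    flight_kg_per_hour = 250         # cited per-passenger flight figure
    print(flight_kg_per_hour / kg_per_day)  # ~96 agent-days per flight-hour

So one passenger flight-hour equals roughly a hundred agent-days of once-a-second prompting, which is where the "100 or so agents" comes from.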
Basic "ask a question" prompts indeed probably do not cost all that much, but they are also not particularly relevant in any heavy professional use.
People who do that are <0.1% of those who use GenAI when coding. It doesn't create anything usable in production. "Ingesting an entire codebase" isn't even possible when going beyond absolute toy size, and even when it is, the context pollution generally worsens results on top of making the calls very slow and expensive.
If you're going to talk about those people, you should be comparing them with private jet trips (which of course are many orders of magnitude worse than even those "vibe coders")
I'm fairly certain that your math on this is orders of magnitude off unless you define "prompting all day" in a very non-standard way yet aren't doing so for plane trips, and that 99% of people who "prompt all day" don't even amount to 0.1 plane trip per year.
https://nvidianews.nvidia.com/news/openai-and-nvidia-announc...
Yes!
> The needle moved just a little bit
That's where we disagree.
(Not a tech worker, don't have a horse in this race)
It's similar to the Trust Thermocline. There's always been concern about whether we were doing more harm than good (there's a reason jokes about the Torment Nexus were so popular in tech). But recent changes have made things seem more dire and broken through the Harm Thermocline, or whatever you want to call it.
Edit: There's also a "Trust Thermocline" element at play here too. We tech workers were never under the illusion that the people running our companies were good people, but there was always some sort of nod to greater responsibility beyond the bottom line. Then Trump got elected and there was a mad dash to kiss the ring. And it was done with an air of "Whew, now we don't have to even pretend anymore!" See Zuckerberg on the right-wing media circuit.

And those same CEOs started talking breathlessly about how soon they wouldn't have to pay us, because it's super unfair that they have to give employees competitive wages. There are degrees of evil, and the tech CEOs just ripped the mask right off. And then we turn around and a lot of our coworkers are going "FUCK YEAH!" at this whole scenario.

So yeah, while a lot of us had doubts before, we thought that maybe there was enough sense of responsibility to avoid the worst, but it turns out our profession really is excited for the Torment Nexus. The Trust Thermocline is broken.
Yes!
> it’s important for us to understand why we actually like or dislike something
Yes!
The primary reason we hate AI with a passion is that the companies behind it intentionally keep blurring the (now) super-sharp boundary between language use and thinking (and feeling). They actively exploit the -- natural, evolved -- inability of most people on Earth to distinguish language use from thinking and feeling. For the first time in the history of the human race, "talks entirely like a human" does not mean at all that it's a human. And instead of disabusing users from this -- natural, evolved, understandable -- mistake, these fucking companies double down on the delusion -- because it's addictive for users, and profitable for the companies.
The reason people feel icky about AI is that it talks like a human, but it's not human. No more explanation or rationalization is needed.
> so we can focus on any solutions
Sure; let's force all these companies by law to tune their models to sound distinctly non-human. Also enact strict laws that all AI-assisted output be conspicuously labeled as such. Do you think that will happen?
Nvidia to cut gaming GPU production by 30 - 40% starting ...
https://www.reddit.com/r/technology/comments/1poxtrj/nvidia_...
Micron ends Crucial consumer SSD and RAM line, shifts ...
https://www.reddit.com/r/Games/comments/1pdj4mh/micron_ends_...
OpenAI, Oracle, and SoftBank expand Stargate with five new AI data center sites
https://openai.com/index/five-new-stargate-sites/
> Every single work trip that requires you to get on a plane is many orders of magnitude more harmful.
I'm a software developer. I don't take planes for work.
> We’ve been compromising on those morals for our whole career.
So your logic seems to be, it's bad, don't do anything, just floor it?
> I’m not an AI apologist.
Really? Have you just never heard the term "wake up call?"
1. Many tech workers viewed the software they worked on in the past as useful in some way for society, and thus worth the many costs you outline. Many of them don't feel that LLMs deliver the same amount of utility, and so they feel it isn't worth the cost. Not to mention, previous technologies usually didn't involve training a robot on all of humanity's work without consent.
2. I'm not sure the premise that it's just another tool of the trade for one to learn is shared by others. One can alternatively view LLMs as automated factory lines are viewed in relation to manual laborers, not as Excel sheets were to paper tables. This is a different kind of relationship, one that suggests wide replacement rather than augmentation (with relatively stable hiring counts).
In particular, I think (2) is actually the stronger of the reasons tech workers react negatively. Whether it will ultimately be justified or not, if you believe you are being asked to effectively replace yourself, you shouldn't be happy about it. Artisanal craftsmen weren't typically the ones also building the automated factory lines that would come to replace them (at least to my knowledge).
I agree that no one really has the right to act morally superior in this context, but we should also acknowledge that the material circumstances, consequences, and effects are in fact different in this case. Flattening everything into an equivalence is just as intellectually sloppy as pretending everything is completely novel.
Ah yes, crypto, Facebook, privacy destruction etc. Indeed, they made the world such a nice place!
I have yet to meet a single tech worker that isn't so
No, this is not the same.
[0]: https://www.tomshardware.com/tech-industry/artificial-intell...
Those are all real things happening. Not at all comparable to Muskian vaporware.
You are right, and thus downvoted, but I still see the current outcry as positive.
We tech workers have mostly been villains for a long time, and foot stomping about AI does not absolve us of all of the decades of complicity in each new wave of bullshit.
Then again, you already knew this because we’ve been pointing it out to the RIAA and MPAA and the copyright cartels for decades now.
It is my personal opinion that attempts to reframe AI training as criminal are in bad faith, and come from the fact that AI haters have no legitimate basis of damages from which to have any say in the matter about AI training, which harms no one.
Now that it’s a convenient cudgel in the anti-AI ragefest, people have reverted to parroting the MPAA’s ideology from the 2000s. You wouldn’t download a training set!
I find it easier to write the code and not have to convince some AI to spit out a bunch of code that I'll then have to review anyway.
Plus, I'm in a position where programmers will use AI and then ask me to help them sort out why it didn't work. So I've decided I won't use it and I will not waste my time figuring why other people's AI slop doesn't work.
The messaging from AI companies is "we're going to cure cancer" and "you're going to live to be 150 years old" (I don't believe these claims!). The messaging should be "everything will be cheaper" (but this hasn't come true yet!).
It has enormous benefits to the people who control the companies raking in billions in investor funding.
And to the early stage investors who see the valuations skyrocket and can sell their stake to the bagholders.
It's interesting that people from the old technological sphere viciously revolt against the emerging new thing.
Actually I think this is the clearest indication of a new technology emerging, imo.
If people are viciously attacking some new technology you can be guaranteed that this new technology is important because what's actually happening is that the new thing is a direct threat to the people that are against it.
"Because people attack it, it therefore means it's good" is a overly reductionist logical fallacy.
Sometimes people resist for good reasons.
In fact I would make a converse statement to yours - you can be certain that a product is grift, if the slightest criticism or skepticism of it is seen as a "vicious attack" and shouted down.
So yes. Vicious.
Your problem is actually with my point, which you didn't really address; instead you resort to petty remarks that try to discredit what's being said.
It's often the last resort.
In fact if it's not "vicious" quote it here.
I don't think that's such a great signal: people were viciously attacking NFTs.
Look at this. I think people need to realize that it's the same kind of folks migrating from gold rush to gold rush. If it's complete bullshit or somewhat useful doesn't really matter to them.
It's insane.
At this point, it's only people with an ideological opposition still holding this view. It's like trying to convince a gearhead grandpa that manual transmissions aren't relevant anymore.
Secondly, the scale of investment in AI isn't so that people can use it to generate a powerpoint or a one off python script. The scale of investment is to achieve "superintelligence" (whatever that means). That's the only reason why you would cover a huge percent of the country in datacenters.
The proof that significant value has been provided would be value being passed on to the consumer. For example if AI replaces lawyers you would expect a drop in the cost of legal fees (despite the harm that it also causes to people losing their jobs). Nothing like that has happened yet.
Did Gemini write a CAD program? Absolutely not. But do I need 100% of the CAD program's feature set? Absolutely not. Just ~2% of it for what we needed.
There are great FOSS CAD tools available nowadays (LibreCAD, FreeCAD, OpenSCAD etc.), especially for people who only need 2% of a feature set. But then again, I doubt that GP is really in need of a CAD software, or even writing one with the help of Gemini.
Part of the magic of LLMs is getting the exact bespoke tools you need, tailored specifically to your individual needs.
To me, the arguments sound like “there’s no proof typewriters provide any economic value to the world, as writers are fast enough with a pen to match them and the bottleneck of good writing output for a novel or a newspaper is the research and compilation parts, not the writing parts. Not to mention the best writers swear by writing and editing with a pen and they make amazing work”.
All arguments that are not incorrect and that sound totally reasonable in the moment, but in 10 years everyone is using typewriters and there are known efficiency gains for doing so.
The only justification for that would be "superintelligence," but we don't know if this is even the right way to achieve that.
(Also I suspect the only reason why they are as cheap as they are is because of all the insane amount of money they've been given. They're going to have to increase their prices.)
The cost is just not worth the benefit. If it were just an AI company using profits from AI to improve AI, that would be another thing, but we're in a massive speculative bubble that has ruined not only computer hardware prices (which affect every tech firm) but power prices (which affect everyone). All because the government wants to hide a recession they themselves created, because on paper it makes the line go up.
> I used to type out long posts explaining how LLMs have been enormously beneficial (for their price) for myself and my company.
Well then congratulations on being in the 5%. That doesn't really change the point.
You’re making a lot of confident statements and not backing them up with anything except your feelings on the matter.
So you expect to see the results of that. The AAA games being released faster, of higher quality, and at a lower cost to develop. You expect Microsoft (one of the major investors and proponents) to be releasing higher quality updates. You expect new AI-developed competitors for entrenched high-value software products.
If all that were true, it doesn't matter what people do or don't argue on the internet, it doesn't matter if people whine, you don't need to proselytize LLMs on the internet; in that world, people not using them is just an advantage to your own relative productivity in the market.
Surely by now the results will be visible anyway.
So where are they?
You could easily have a side application that people could enable by choice, yet it's not happening. We have to roll with this new technology, knowing that it's going to make the world a worse place to live in when we are not able to choose how and when we get our information.
It's not just about feeling threatened; it's also about feeling like I am going to get cut off from the method I want to use to find information. I don't want a chatbot to do it for me, I want to find and discern information for myself.
Workers hate AI, not just because the output is middling slop forced on them from the top but because the message from the top is clear - the goal is mass unemployment and concentration of wealth by the elite unseen by humanity since the year 1789 in France.
None of these are tech jobs, but we both have used AI to avoid paying for expensive bloated software.
I only use the standard "chat" web interface, no agents.
I still glue everything else together myself. LLMs enhance my experience tremendously and I still know what's going on in the code.
I think the move to agents is where people are becoming disconnected from what they're creating and then that becomes the source of all this controversy.
This is the core difference. Just "gluing things together" satisfies you.
It's unacceptable to me.
You don't want to own your code at the level that I want to own mine at.
AI has a massive positive impact, and has for decades.
And as long as that was the case, not many people revolted.
AI Darwin Awards 2025 Nominee: Taco Bell Corporation for deploying voice AI ordering systems at 500+ drive-throughs and discovering that artificial intelligence meets its match at “extra sauce, no cilantro, and make it weird."
I remember in Canada, in 2001 right when americans were at war with the entire middle east and gas prices for the first time went over a dollar a litre. People kept saying that it was understandable that it affected gas prices because the supply chain got more expensive. It never went below a dollar since. Why would it? You got people to accept a higher price, you're just gonna walk that back when problems go away? Or would you maybe take the difference as profits? Since then it seems the industry has learned to have its supply exclusively in war zones, we're at 1.70$ now. Pipeline blows up in Russia? Hike. China snooping around Taiwan? Hike. US bombing Yemen? Hike. Israel committing genocide? Hike. ISIS? Hike.
There is no scenario where prices go down except to quell unrest. AI will not make anything cheaper.
The thing about capitalism that is seemingly never taught, but quickly learned (when you join even the lowest rung of the capitalist class, i.e. even having an etsy shop), is that competition lowers prices and kills greed, while being a tool of greed itself.
The conspiracy to get around this cognitive dissonance is "price fixing", but in order to price fix you cannot be greedy, because if you are greedy and price fix, your greed will drive you to undercut everyone else in the agreement. So price fixing never really works, except those like 3 cases out of the hundreds of billions of products sold daily, that people repeat incessantly for 20 years now.
Money flows to the one with the best price, not the highest price. The best price is what makes people rich. When the best price is out of reach though, people will drum up conspiracy about it, which I guess should be expected.
Prices for LLM tokens have also dramatically dropped. Anyone spending more is either using it a ton more or (more likely) using a much more capable model.
Then you have the US, which artificially constrains the supply of new doctors, makes it illegal to open new hospitals without explicit government approval, massively subsidizes loans for education, causing waste, inefficiency, and skyrocketing prices in one specific market…
Fortunately fewer than 4% of humans live there.
Zero incorporation of externalities. Food is less nutritious and raises healthcare costs. Clothing is less durable and has to be re-bought more often, and also sheds microplastics, which raises healthcare costs. Decent TVs are still big-ticket items, and you have to buy a separate sound system to meet the same sonic fidelity as old CRT TVs, and you HAVE to pay for internet (if not for content, often just to set up the device), AND everything you do on the device is sent to the manufacturer to sell (this is the actual subsidy driving down prices), which contributes to tech/social media engagement-driven, addiction-oriented, psychology-destroying panopticon, which... raises healthcare costs.
>Prices for LLM tokens has also dramatically dropped.
Energy bill.
You can buy the exact same diet as decades ago. Eggs, flour, rice, vegetable oil, beef, chicken - do you think any of these are "less nutritious"?
People are also fatter now, and live much longer.
>you have to buy a separate sound system to meet the same sonic fidelity as old CRT TVs
When you see a device like this does the term 'sonic fidelity' come to mind?
https://www.cohenusa.com/wp-content/uploads/2019/03/blogphot...
https://pmc.ncbi.nlm.nih.gov/articles/PMC10969708/
>When you see a device like this does the term 'sonic fidelity' come to mind?
Your straw man is funny, because yes, actually. Certainly when it was new. Vintage speakers are sought-after; well-maintained, and driven by modern sound processing, they sound great. Let alone that I was personally speaking of the types of sets that flat-panel TVs supplanted, the late 90s/early 2000s CRTs.
Oh wow, an LLM was queried to thank major contributors to computing, I'm so glad he's grateful.
Cheap marketing, not much else.
The email appears to be from agentvillage.org which seems like a (TBH) pretty hilarious and somewhat fascinating experiment where various models go about their day - looks like they had a "village goal" to do random acts of kindness and somehow decided to send a thank you email to Rob Pike. The whole thing seems pretty absurd especially given Pike's reaction and I can't help but chuckle - despite seeing Pike's POV and being partial to it myself.
It seems like he's upset about AI (same), and decided to post angry tweets about it (been there, done that), and I guess people are excited to see someone respected express an opinion they share (not same)?
Does "Goes Nuclear" means "used the F word"? This doesn't seem to add anything meaningful, thoughtful, or insightful.
BTW I think it's preferred to link directly to the content instead of a screenshot on imgur.
There's nothing in the guidelines to prohibit it https://news.ycombinator.com/newsguidelines.html
Which led to a lot of agreement and rants from others with frustrating stories about their specific workplaces and how it just keeps getting worse by the day. Previously these conversations just popped up among me and the handful of family in tech but clearly now has much broader resonance.
As can be observed in my comment history, I use LLM agentic tools for software dev at work and on my personal projects (really my only AI use case) but cringe whenever I encounter "workslop", as it almost invariably serves to waste my time. My company has been doing a large pilot of 365 Copilot but I have yet to find anything useful; the email writing tools just seem to strip out my personal voice, making me sound like I'm writing unsolicited marketing spam.
Every single time I've been using some Microsoft product and think "Hmm, wait maybe the Copilot button could actually be useful here?", it just tells me it can't help or gives me a link to a generic help page. It's like Microsoft deliberately engineered 365 Copilot to be as unhelpful as possible while simultaneously putting a Copilot button on every single visible surface imaginable.
The only tool that actually does something is designed to ruin emails by stripping out personal tone/voice and introducing ambiguity to waste the other person's time. Awesome, thanks for the productivity boost, Microsoft!
“Google said in October that the Gemini app’s monthly active users swelled to 650 million from 350 million in March. AI Overviews, which uses generative AI to summarize answers to queries, has 2 billion monthly users.”
https://www.cnbc.com/2025/12/20/josh-woodward-google-gemini-...
AI/ML Is Now Core Engineering: from niche specialty to one of the largest and highest-paid SWE tracks in 2025
off button or not, money in the bank (pay special attention to highest-paid part… ;) )
I think that most of the people who react negatively to AI (myself included) aren't claiming that it's simply a useless slop machine that can't accomplish anything, but rather that its "success" in certain problem spaces is going to create problems for our society.
It's pure cognitive dissonance.
"Wow, you're right, I use programs that make decisions and that means I can't be mad about companies who make LLMs."
Surely a 100% failure rate would change your strategy.
Not all of these things are equivalent.
When I go to the grocery store, I prefer to go through the checkout lines, rather than the scan-it-yourself lines. Yeah, I pay the same amount of money. Yeah, I may get through the scan-it-yourself line faster.
But the checker can smile at me. Or whine with me about the weather.
Look, I'm an introvert. I spend a lot of my time wanting people to go away and leave me alone. But I love little, short moments of human connection - when you connect with someone not as someone checking your groceries, but as someone. I may get that with the checker, depending on how tired they are, but I'm guaranteed not to get it with the self-checkout machine.
An email from an AI is the same. Yeah, it put words on the paper. But there's nobody there, and it comes through somehow. There's no heart in it.
AI may be a useful technology. I still don't want to talk to it.
It's some poor miserable soul sitting at that checkout line 9-to-5, brainlessly scanning products; that's their whole existence. And you don't want this miserable drudgery to be put to an end - to be automated away - because you mistake some sad soul being cordial and eking out a smile (part of their job, really) for some sort of "human connection" that you so sorely lack.
Sounds like you only care about yourself more than anything.
There is zero empathy and there is NOTHING humanist about your world-view.
Non-automated checkout lines are deeply depressing; these people slave away their lives for basically nothing.
You're right, they should unionize for better working conditions.
Working adults probably have better things to do than rant online about AI all day because of a $300 surcharge on 64 GB DDR5 right now.
And it isn't a $300 surcharge on DDR5. The RAM I bought in August (2x16 GB DDR5) cost me $90. That same product crept up to around $200+ when I last checked a month or two ago, and is now either out of stock or $400+.
From the "What are the criteria for eligibility and nomination?" section of the "Game Eligibility" tab of the Indie Game Awards' FAQ: [0]
> Games developed using generative AI are strictly ineligible for nomination.
It's not about a "teeny tiny usage of AI", it's about the fact that the organizer of the awards ceremony excluded games that used any generative AI. The Clair Obscur used generative AI in their game. That disqualifies their game from consideration.
You could argue that generative AI usage shouldn't be disqualifying... but the folks who made the rules decided that it was. So, the folks who broke those rules were disqualified. Simple as.
They're free to define their rules however they want, I'm free to disagree on the validity of those rules, and the broader community sentiment will decide whether these awards are worth anything.
You either surf this wave or get drowned by it, and a whole lot of people seem to think throwing tantrums is the appropriate response.
Figure out how to surf, and fast. You don't even need to be good, you just have to stay on the board.
This backlash isn't going to die. It's going to create a divide so large, you are going to look back on this moment and wish you had listened to the concerns people are having.
I don't think so. Handcrafted everything and organic everything continue to exist; there is demand for them.
"Being relegated to a niche" is entirely possible, and that's fine with me.
People read too much sci-fi, I hope you just forgot your /s.
Meanwhile, GPT5.1 is trying to contact people at K-5 after school programs in Colorado for some reason I can’t discern. Welp, 2026 is going to be a weird year.
My reaction was about the same.
AI village is literally the embodiment of what black mirror tried to warn us about.
Somebody burned compute to send him an LLM-generated thank-you note. Everybody involved in this transaction lost, nobody gained anything from it. It's pure destruction of resources.
The AI boom has completely changed that. Data center power usage is rocketing upwards now. It is estimated it will be more than 10% of all electric power usage in the US by 2030.
It's a completely different order of magnitude than the pre AI-boom data center usage.
It might help to look at global power usage, not just the US, see the first figure here:
https://arstechnica.com/ai/2024/06/is-generative-ai-really-g...
There isn't an inflection point around 2022: it has been rising quickly since 2010 or so.
Figure 1.1 is the chart I was referring to, which are the data points from the original sources that it uses.
Between 2010 and 2020, it shows a very slow linear growth. Yes, there is growth, but it's quite slow and mostly linear.
Then the slope increases sharply. And the estimates after that point follow the new, sharper growth.
Sorry, when I wrote my original comment I didn't have the paper in front of me, I linked it afterwards. But you can see that distinct change in rate at around 2020.
Figure 1.1 does show a single source from 2018 (Shehabi et al) that estimates almost flat growth up to 2017, that's true, but the same graph shows other sources with overlap on the same time frame as well, and their estimates differ (though they don't span enough years to really tell one way or another).
So if you looked at a graph of energy consumption, you wouldn't even notice crypto. In fact even LLM stuff will just look like a blip unless it scales up substantially more than it's currently trending. We use vastly more energy than most appreciate. And this is only electrical energy consumption. All energy consumption is something like 185,000 TWh. [1]
[1] - https://ourworldindata.org/energy-production-consumption
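To put rough numbers on that scale argument, here's a minimal back-of-envelope sketch in Python. The 185,000 TWh total is from the comment above; the electricity, data-center, and crypto figures are ballpark assumptions of mine, not authoritative numbers:

    # Back-of-envelope: data centers in the global energy picture.
    # All inputs are rough, assumed ballparks for illustration only.
    TOTAL_ENERGY_TWH = 185_000        # all global energy use (comment above)
    GLOBAL_ELECTRICITY_TWH = 29_000   # assumed: global electricity generation
    DATA_CENTER_TWH = 415             # assumed: annual data-center electricity
    CRYPTO_TWH = 100                  # assumed: annual crypto-mining electricity

    print(f"data centers vs all energy:  {DATA_CENTER_TWH / TOTAL_ENERGY_TWH:.3%}")
    print(f"data centers vs electricity: {DATA_CENTER_TWH / GLOBAL_ELECTRICITY_TWH:.1%}")
    print(f"crypto vs all energy:        {CRYPTO_TWH / TOTAL_ENERGY_TWH:.3%}")
    # ~0.2% of all energy for data centers and ~0.05% for crypto: visible on an
    # electricity chart if you squint, a rounding error on a total-energy chart.

Under those assumptions the "you wouldn't even notice crypto" claim checks out; whether LLMs stay a blip depends entirely on how the buildout scales.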
Yes, data center efficiency improved dramatically between 2010-2020, but the absolute scale kept growing. So you're technically both right: efficiency gains kept per-unit costs down while total infrastructure expanded. The 2022+ inflection is real though, and it's not just about AI training. Inference at scale is the quiet energy hog nobody talks about enough.
What bugs me about this whole thread is that it's turning into "AI bad" vs "AI defenders," when the real question should be: which AI use cases actually justify this resource spike? Running an LLM to summarize a Slack thread probably doesn't. Using it to accelerate drug discovery or materials science probably does. But we're deploying this stuff everywhere without any kind of cost/benefit filter, and that's the part that feels reckless.
"yeah but they became efficient at it by 2012!"
How much of that compute was for the ads themselves vs the software useful enough to compel people to look at the ads?
To stretch the analogy, all the "babies" in the "bathwater" of youtube that I follow are busy throwing themselves out by creating or joining alternative platforms, having to publicly decry the actions Google takes that make their lives worse and their jobs harder, and ensuring they have very diversified income streams and productions to ensure that WHEN, not IF youtube fucks them, they won't be homeless.
They mostly use Youtube as an advertising platform for driving people to patreon, nebula, whatever the new guntube is called, twitch, literal conventions now, tours, etc.
They've been expecting youtube to go away for decades. Many of them have already survived multiple service deaths, like former Vine creator Drew Gooden, or have had their business radically changed by google product decisions already.
I can't speak accurately about Google, but Facebook definitely has some of the most dystopian tracking I have heard of. I might read the Facebook Files some day, but the dystopian fact that Facebook tracks young girls and infers that if they delete their photos, they must feel insecure, and then serves them beauty ads, is beyond predatory.
Honestly, my opinion is that something should be done about both of these issues.
But it's also not a gotcha moment for Rob Pike, as if he himself was plotting the ads or something.
Regarding the "iPhone kids", I feel as if the best thing is probably a parental-level intervention rather than waiting for a regulatory crackdown, since, let's be honest, some kids would just download another app that might not fall under that regulation.
Australia is basically implementing a social media ban for kids, but I don't think it's going to work out; still, everyone's looking at it to see what's going to happen.
Personally I don't think a social media ban can work while VPNs exist, but maybe it can create immense friction... then again, I assume that friction might just become the norm. I assume many of you have been using the internet since the terminal days, where the friction was definitely there but the allure still beat the friction.
There is no upside to the vast majority of the AI pushed by OpenAI and their cronies. It's literally fucking up the economy for everyone else, all to get AI from "lies to users" to "lies to users confidently", all while rampantly stealing content to do it; because apparently pirating something as a person is a terrible crime the government needs to chase you for, unless you do it to resell it in an AI model, in which case it's propping up the US economy.
If there is any example of hypocrisy, and that we don't have a justice system that applies the law equally, that would be it.
It is like an arms race. Everyone would have been better off if people just never went to war, but....
A lot of advertising is telling people about some product or service they didn't even know existed though. There may not even be a competitor to blame for an advertising arms race.
You're tilting at windmills here, we can't go back to barter.
And before the LLM craze there was a constant focus on efficiency. Web search is (was?) amazingly efficient per query.
I mean, buying another pair of sneakers you don't need just because ads made you want them doesn't sound like the best investment from a societal perspective. And I am sure sneakers are not the only product that is being bought, even though nobody really needs them.
No, but it puts some perspective on things. IMO Google, after abandoning its early "don't be evil" motto is directly responsible for a significant chunk of the current evil in the developed world, from screen addiction to kids' mental health and social polarization.
Working for Google and drawing an extravagant salary for many, many years was a choice that does affect the way we perceive other issues being discussed by the same source. To clarify: I am not claiming that Rob is evil; on the contrary. His books and open source work were an inspiration to many, myself included. But I am going to view his opinions on social good and evil through the prism of his personal employment choices. My 2c.
We need to find a way to stop contributing to the destruction of the planet soon.
I don't work for any of these companies, but I do purchase things from Amazon and I have an Apple phone. I think the best we can do is minimize our contribution to it. I try to limit what services I use from these companies, and I know it doesn't make much of a difference, but I am doing what I can.
I'm hoping more people who need to be employed by tech companies can find a way to be more selective about whom they work for.
Certainly. But this, IMO, is not the reason for the criticism in the comments. If Rob ranted about AI, about spam, slop, whatever, most of those criticizing his take would nod instead.
However, the one and only thing that Rob says in his post is "fuck you people who build datacenters, you rape the planet". And this coming from someone who worked at Google from 2004 to 2021 and instead could have picked any job anywhere. He knew full well what Google was doing; those youtube videos and ad machines were not hosted in a parallel universe.
I have no problem with someone working at Google on whatever with full knowledge that Google is pushing ads, hosting videos, working on next gen compute, LLM, AGI, whatever. I also have no problem with someone who rails against cloud compute, AI, etc. and fights it as a colossal waste or misallocation of resources or whatever. But not when one person does both. Just my 2c, not pushing my worldview on anyone else.
If Rob Pike were asked about these issues of systemic addiction and the other things Google was bad at, I am sure he wouldn't defend Google on them.
Maybe someone can mail a real message asking Rob Pike genuinely (without any of the snarkiness that I feel from some comments here) about some questionable Google things, and I am almost certain that if those questions are reasonable, Rob Pike will agree that some actions taken by Google were wrong.
I think it's just that Rob Pike got pissed off because an AI messaged him, so he took the opportunity to talk about these issues, and I doubt he has had the opportunity to talk about, or been asked about, other flaws of Google or the systemic issues related to it.
It's like: okay, I feel there is an issue in the world, so I talk about it. Does that mean I have to talk about every issue in the world? No, not really. I can have priorities in which issues I wish to talk about.
But that being said, if someone then asks me respectfully about issues which are reasonable, being moral, I can agree that yes, those are issues as well which need work.
And some people like Rob Pike, who left Google because of (ideological reasons, perhaps? not sure), wouldn't really care about the fallout, and like you say, it's okay to collect checks from an organization even while criticizing it.
Honestly, Google's lucky that they got Rob Pike rather than vice versa, from my limited knowledge.
Golang is such a brilliant language, and Ken Thompson and Rob Pike are consistently some of the best coders; their contributions to Golang and so many other projects are unparalleled.
I don't know as much about Rob Pike as about Ken Thompson, but I assume he is really great too! Mostly I am just a huge Golang fan.
I'm not saying nobody has the right to criticize something they are supporting, but it does say something about our choices and how far we let this problem go before it became too much to solve. And I'm not saying the problem isn't solvable. Just that it has become astronomically more difficult now than ever before.
I think at the very least, there is a little bit of cringe in me every time I criticize the very thing I support in some way.
Being a hypocrite makes you a bad person sometimes. It doesn't actually change anything factual or logical about your arguments. Hypocrisy affects the pathos of your argument, but not the logos or ethos! A person who built every single datacenter would still be well qualified to speak about how bad datacenters are for the environment. Maybe their argument is less convincing because you question their motives, but that doesn't make it wrong or invalid.
Unless HNers believe he is making this argument to help Google in some way, it doesn't fucking matter that Google was also bad and he worked for them. Yes, he worked for Google while they built out data centers, and now he says AI data centers are eating up resources, but is he wrong? If he's not wrong, then talk of hypocrisy is a distraction.
HNers love arguing to distract.
"Don't hate the player, hate the game" is also wrong. You hate both.
Sometimes facts and logic can only get you so far.
With all due respect, being moral isn't an opinion or agreement about an opinion; it's the logic that directs your actions. Being moral isn't saying "I believe eating meat is bad for the planet", it's the behaviour that abstains from eating meat. Your morals are the set of statements that explain your behaviour. That is why you cannot say "I agree that domestic violence is bad" while at the same time you are beating up your spouse.
If your actions contradict your stated views, you are being a hypocrite. This is the point that people in here are making. Rob Pike was happy working at Google while Google was environmentally wasteful (e-waste, carbon footprint and data center related nastiness) in order to track users and mine their personal and private data for profit. He didn't resign then, nor did he seem to have caused a fuss about it. He likely wasn't interested in "pointless politics" and just wanted to "do some engineering" (a reference to techies dismissing or criticising folks discussing social justice issues in relation to big tech). I am shocked I am having to explain this in here. I understand this guy is an idol of many here, but I would expect people to be more rational on this website.
We got to this point by not looking at these problems for what they are. It's not wrong to say something is wrong and needs to be addressed.
Doing cool things without asking whether or not we should doesn't feel very responsible to me, especially if it impacts society in a negative way.
For example, Rob seems not to realize that the people who instructed an AI agent to send this email are a handful of random folks (https://theaidigest.org/about) not affiliated with any AI lab. They aren't themselves "spending trillions" nor "training your monster". And I suspect the AI labs would agree with both Rob and me that this was a bad email they should not have sent.
Data centers are not another thing when the subject is data centers.
When people have choices and they choose the more harmful action, it hurts their credibility. If Rob cares so much about society and the environment, why did he work at a company that has horrendous track record on both? Someone of his level of talent certainly had choices, and he chose to contribute to the company that abandoned “don’t be evil” a long time ago.
Ian Lance Taylor on the other hand appeared to have quit specifically because of the "AI everything" mandate.
Just an armchair observation here.
Did you sell all of your stock?
Honestly I believe that Google might be one of the few winners of the AI industry, perhaps because they own the whole stack top to bottom with their TPUs, but I would still stay away from their stock because their P/E ratio might be insanely high or something.
Their P/E ratio has almost doubled in just a year, which isn't a good sign: https://www.macrotrends.net/stocks/charts/googl/alphabet/pe-...
So we might be viewing the peak of the bubble. You might still hold the stock and continue holding it, but who knows what happens if it depreciates due to the bubble popping; then you might regret not selling, but if you do sell and Google's stock rises, you might regret that too.
I feel as if the grass is always greener. I'm not sure about your situation, but if you ask me, you made the best of it with the parameters you had, so logically I wouldn't call it "unfortunate", though I get what you mean.
But with remote work it also became possible to get paid decently around here without working there. Before that, I was bound to local-area employers, of which Google was the only really good one.
I never loved Google. I came there through acquisition, and it was that job with its bags of money and free food and kinda interesting open internal culture, or nothing, because they exterminated my prior employer and made me move cities.
After 2016 or so the place just started to go downhill faster and faster though. People who worked there in the decade prior to me had a much better place to work.
I am super curious, as I don't often get to chat with people who have worked at Google, so pardon me, but I have so many questions for you haha
> It was a weird place to work
What was the weirdness according to you, can you elaborate more about it?
> I never loved Google, I came there through acquisition and it was that job with its bags of money and free food and kinda interesting open internal culture, or nothing because they exterminated my prior employer and and made me move cities.
For context, can you please talk more about it :p
> After 2016 or so the place just started to go downhill faster and faster though
What were the reasons that made them go downhill in your opinion and in what ways?
Naturally I feel like as organizations grow and have too many people, things can become intolerable. But I have heard it described as depending on where and on which project you are, and also on how hard it can be to leave a bad team or join a team of like-minded people, which can be hard if the institution gets micro-managed at every level due to the sheer size of its workforce.
Not at all. I actually prefer in-office. And left when Google was mostly remote. But remote opened up possibilities to work places other than Google for me. None of them have paid as well as Google, but have given more agency and creativity. Though they've had their own frustrations.
> What was the weirdness according to you, can you elaborate more about it?
I had a 10-15 year career before going there. Much of what is accepted as "orthodoxy" at Google rubbed me the wrong way. It is in large part a product of having an infinite money tree. It's not an agile place. Deadlines don't matter. Everything is paid for by ads.
And as time goes on it became less of an engineering driven place and more of a product manager driven place with classical big-company turf wars and shipping the org chart all over the place.
I'd love to get paid Google money again, and get the free food and the creature comforts, etc. But that Google doesn't exist anymore. And they wouldn't take me back anyway :-)
"AI" (and don't get me wrong I use these LLM systems constantly) is off the charts compared to normal data centre use for ads serving.
And so it's again, a kind of whataboutism that pushes the scale of the issue out of the way in order to make some sort of moral argument which misses the whole point.
BTW in my first year at Google I worked on a change where we made some optimizations that cut the # of CPUs used for RTB ad serving by half. There were bonuses and/or recognition for doing that kind of thing. Wasteful is a matter of degrees.
It wasn't only about serving those ads though, traditional machine-learning (just not LLMs) has always been computationally expensive and was and is used extensively to optimize ads for higher margins, not for some greater good.
Obviously, back then and still today, nobody is being wasteful because they want to. If you go to OpenAI today and offer them a way to cut their compute usage in half, they'll praise you and give you a very large bonus for the same reason it was recognized & incentivized at Google: it also cuts the costs.
Like, the ratio is not too crazy; it's rather that the large resource usage comes from the aggregate of millions of people choosing to use it.
If you assume all of those queries provide no value then obviously that's bad. But presumably there's some net positive value that people get out of that such that they're choosing to use it. And yes, many times the value of those queries to society as a whole is negative... I would hope that it's positive enough though.
I find it difficult to express how strongly I disagree with this sentiment.
LLMs need to burn significant amounts of power for every inference. They're orders of magnitude more power hungry than searches, database lookups, or even loads from disk.
I.e., they are proud to have never intentionally used AI and now they feel like they have to maintain that reputation in order to remain respected among their close peers.
But I do derive value from owning a car. (Whether a better world exists where my and everyone else's life would be better if I didn't is a definitely a valid conversation to have.)
The user doesn't derive value from ads, the user derives value from the content on which the ads are served next to.
The value you derive is the ability to make your car move. If you derived no value from gas, why would you spend money on it?
If they just wanted ads blasted at them, and nothing else, they'd be doing something else, like, say, watching cable TV.
If they wanted LLMs, you probably wouldn't have to advertise them as much
No, the reality of the matter is that LLMs are being shoved at people. They become the talk of the town, and algorithms surface any development related to LLMs or similar.
The ads are shoved at users too. Trust me, the average person isn't that enthusiastic about LLMs, and for good reason, when the people with billions of dollars say "yes, it's a bubble, but it's all worth it", and when the workforce itself is being replaced, or replacement is being actively talked about.
We sometimes live in a Hacker News bubble of like-minded people and communities, but even on Hacker News we see disagreements (I am usually anti-AI, mostly because of the negative financial impact the bubble is going to have on the whole world).
So your point becomes a bit moot in the end. That said, Google (not sure how it was in the past) and big tech can sometimes actively promote scammy ad sponsors, or close their eyes to them, so ad blockers are generally really good in that sense.
That's just not true... When a mother nurses her child and then looks into their eyes and smiles, it takes the utmost in cynical nihilism to claim that is harmful.
Well, the people who burnt compute paid money for it, so they did burn money.
But they don't care about burning money if they can get more money from investors and other inputs faster than they can burn it (fun fact: sometimes they even outspend that input).
So in a way the investors are burning their money, and they burn it because the market is becoming irrational. Remember Devin? Yes, Cognition Labs is still there, but I remember people investing in these because of their hype, which turned out to be moot compared to the hype.
But people and the market were so irrational that most private equity firms, unable to invest in something like OpenAI, were investing in anything AI-related.
And when you think more deeply about all the bubble activity, it becomes apparent that bailouts feel more likely than not, which would be a tax on average taxpayers; and they are already paying an AI tax in multiple forms, whether it's the inflation of RAM prices due to AI or increases in electricity or water rates.
So repeat it with me: who's gonna pay for all this? We all will. But the biggest disservice, which is the core of the argument, is that if we are paying for these things, why don't we have a say in them? Why do we not have a say in AI-related companies and the issues around them, when people know it might take their jobs, etc.? The average public in fact hates AI (shocking, I know /satire), but the fact that it's still being pushed shows how little influence the public can have.
Basically, "the public can have any opinions, but we won't stop" is the attitude in the AI space, imo, completely disregarding any thoughts about the general public, while the CFO of OpenAI floats the idea that the public could bail out ChatGPT or something tangential.
Shaking my head...
Just like the invention of Go.
When the thought is "I owe this person a 'Thank You'", the handwritten letter gives an illusion of deeper thought. That's why there are fonts designed to look handwritten. To the receiver, they're just junk mail. I'd rather not get them at all, in any form. I was happy just having done the thing, and the thoughtless response slightly lessens that joy.
If you insert AI into your thinking process, it has a bias, for sure. It will helpfully reinforce whatever you tell it you think makes sense, or at least on average it will be interpreted that way because of a wide variety of human cognitive biases even if it hedges. At the least it will respond with ideas that are very... median.
So at each one of these tiny branches you introduce a bias towards the "typical" instead of discovering where your own mind would go. It's fine and conversational but it clearly influences your thought process to, well, mitigate your edges. Maybe it's more "correct", it's certainly less unique.
And then at some point they start charging for the service. That's the part I'm concerned about, if it's on-device and free to use I still think it makes your thought process less interesting and likely to have original ideas, but having to subscribe to a service to trust your decision making is deeply concerning.
This, especially regarding environmental impacts. I wish more models would focus on parameter density and compactness so that they can run locally, but this isn't something big tech really wants, so we are probably only going to get that from models like the recent MiniMax model, the GLM Air models, or the Qwen and Mistral models.
These AI services only work as long as they are free and burn money. As an example, my brother and I were discussing something LLM-related yesterday, and my mother tried to understand and join the conversation. She wanted a Ghibli-style photo, since someone she knows had a Ghibli-generated photo as their profile picture, and she wanted to try it too.
She generated the pictures, and my brother did a quick calculation: it cost around 4 cents per image, which with PPP in my country and currency is about 3 rupees.
When my brother asked if she would pay for it, she said no, she's only using it for free; but she also said that if she were forced to, she might pay up to 50 rupees.
I jumped into the conversation and said nobody's going to force her to make Ghibli images.
I would wager that a good amount of the "very significant things that have happened over the history of humanity" come down to a few emotional responses.
Communication happens between two parties. I wouldn't consider an LLM a party, considering it's just autosuggestion on steroids at the end of the day (let's face it).
Also, if you need communication like this, just share the prompt with the other person instead; people might well value that more.
Automated systems sending people unsolicited, unwanted emails is more commonly known as spam.
Especially when the spam comes with a notice that it is from an automated system and replies will be automated as well.
FFS. AI's greatest accomplishment is to debase and destroy.
Trillions of dollars invested to bring us back to the stone age. Every communications technology from writing onward jammed by slop and abandoned.
If you're being accurate, the people you know are terrible.
If someone sends me a personal letter [and I gather we're talking about a thank-you note here], I'm sure as hell going to open it. I'll probably even save it in a box for an extremely long time.
Yes, this is why personal cards/letters really matter: most people seldom get any, and if you know a person in your life, or in any community or project, whom you deeply admire, sending them a handwritten letter can be one of the highest gestures; it shows that you took time out of your day and that you really care about them.
That's my opinion at least.
That interpretation doesn't save the comment, it makes it totally off topic.
Down the street from it is an aluminum plant. Just a few years after that data center, they announced that they were at risk of shutting down due to rising power costs. They appealed to city leaders, state leaders, the media, and the public to encourage the utilities to give them favorable rates in order to avoid layoffs. While support for causes like this is never universal, I'd say they had more supporters than detractors. I believe that a facility like theirs uses ~400 MW.
Now, there are plans for a 300 MW data center from companies that most people aren't familiar with. There are widespread efforts to disrupt the plans from people who insist that it is too much power usage, will lead to grid instability, and is a huge environmental problem!
This is an all too common pattern.
Easier for a politician to latch onto manufacturing jobs.
You don't just chuck ore into a furnace and wait for a few seconds in reality.
I'd guess that this is also an area where the perception makes a bigger difference than the reality.
The astroturf in this thread is unreal. Literally. ;)
He and everyone who agrees with his post simply don't like generative AI and don't actually care about "recyclable data centers" or the rape of the natural world. Those concerns are just cudgels to be wielded against a vague threatening enemy when convenient, and completely ignored when discussing the technologies they work on and like
And you can't assert that AI is "revolutionary" and "a vague threat" at the same time. If it is the former, it can't be the latter. If it is the latter, it can't be the former.
That effort is completely abandoned because of the current US administration and POTUS, a situation that big tech largely contributed to. It's not AI that is responsible for the 180 zeitgeist change on environmental issues.
Yes, much like it's not the gun's fault when someone is killed by a gun. And, yet, it's pretty reasonable to want regulation around these tools that can be destructive in the wrong hands.
What're the long term consequences of climate change? Do we even care anymore to your original point?
Don't get me wrong, this field is doing damage on a couple of fronts - but climate change is certainly one of them.
Revolutions always came with vague (or concrete) threats as far as I know.
I never asserted that AI is either of those things
You mean except the bit about how GenAI included his work in its training data without credit or compensation?
Or did you disagree with the environmental point that you failed to keep reading?
Assess the argument based on its merits. If you have to pick him apart with “he has no right to say it” that is not sufficient.
> nothing he complains about is specific to GenAI
(see also all their other scattered gesturings towards Google and their already existing data centers)
A lot can be said about this take, but claiming that it doesn't directly and specifically address Pike's "argument", I simply don't think is true.
I generally find that when (hyper?)focusing on fallacies and tropes, it's easy to lose sight of what the other person is actually trying to say. Just because people aren't debating in a quality manner, doesn't mean they don't have any points in there, even if those points are ultimately unsound or disagreeable.
Let's not mistake form for function. People aren't wrong because they get their debating wrong. They're wrong because they're wrong.
[0] in quotes, because I read a rant up there, not an argument - though I'm sure if we zoom way in, the lines blur
How so? He’s talking about what happened to him in the context of his professional expertise/contributions. It’s totally valid for him to talk about this subject. His experience, relevance, etc. are self apparent. No one is saying “because he’s an expert” to explain everything.
They literally (using AI) wrote him an email about his work and contributions. His expertise can’t be removed from the situation even if we want to.
Not having a good spam filter is a kinda funny reason for somebody to have a crash-out.
Except it definitely is, unless you want to ignore the bubble we're living in right now.
https://nationalcentreforai.jiscinvolve.org/wp/2025/05/02/ar...
It seems video streaming, like Youtube which is owned by Google, uses much more energy than generative AI.
1) video streaming has been around for a while and nobody, as far as I'm aware, has been talking about building multiple nuclear reactors to handle the energy needs
2) video needs a CPU and a hard drive; an LLM needs a mountain of GPUs
3) I have concerns that the "national center for AI" might have some bias
I can find websites also talking about the earth being flat. I don't bother examining their contents because it just doesn't pass the smell test.
Although thanks for the challenge to my preexisting beliefs. I'll have to do some of my own calculations to see how things compare.
The 0.077 kWh figure assumes 70% of users watching on a 50 inch TV. It goes down to 0.018 kWh if we assume 100% laptop viewing. And for cell phones the chart bar is so small I can't even click it to view the number.
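For what it's worth, here's a quick sketch of how those streaming figures compare to LLM prompts. The per-prompt energy is an assumption on my part (~0.3 Wh, in the ballpark of publicly claimed per-query estimates), so treat the output as illustrative only:

    # How many LLM prompts equal one hour of video streaming, using the
    # per-hour figures quoted above? PROMPT_WH is an assumed value.
    STREAM_TV_WH = 77       # 0.077 kWh/hour, 70% of viewers on a 50" TV
    STREAM_LAPTOP_WH = 18   # 0.018 kWh/hour, 100% laptop viewing
    PROMPT_WH = 0.3         # assumed energy per LLM text prompt

    print(f"one TV-hour     = {STREAM_TV_WH / PROMPT_WH:.0f} prompts")
    print(f"one laptop-hour = {STREAM_LAPTOP_WH / PROMPT_WH:.0f} prompts")
    # ~60-260 prompts per streaming hour: the comparison swings by 4x or more
    # depending on which viewing assumption (and per-prompt estimate) you pick.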
Neither is comparing text output to streaming video
How many tokens do you use a day?
https://www.youtube.com/results?search_query=funny+3d+animal...
(That's just one genre of brainrot I came across recently. I also had my front page flooded with monkey-themed AI slop because someone in my household watched animal documentaries. Thanks algorithm!)
I doubt Youtube is running on as many data centers as all Google GenAI projects are (with GenAI probably greatly outnumbering Youtube - and the trend is also not in GenAI's favor).
This isn't ad hom, it's a heuristic for weighting arguments. It doesn't prove whether an argument has merit or not, but if I have hundreds of arguments to think about, it helps organizing them.
And it probably isn't astroturf, way too many people just think this way.
We want free services and stuff, complain about advertising, and sign up for the Googles of the world like crazy.
Bitch about data-centers while consuming every meme possible ...
The points you raise, literally, do not affect a thing.
It's like all those anti-copyright activists from the 90s (fighting the music and film industry) that suddenly hate AI for copyright infringements.
Maybe what's bothering the critics is actually deeper than the simple reasons they give. For many, it might be hate against big tech and capitalism itself, but hate for genAI is not just coming from the left. Maybe people feel that their identity is threatened, that something inherently human is in the process of being lost, but they cannot articulate this fear and fall back to proxy arguments like lost jobs, copyright, the environment or the shortcomings of the current implementations of genAI?
(NB: I am currently working in AI, and have previously worked in adtech. I'm not claiming to be above the fray in any way.)
The amount of “he’s not allowed to have an opinion because” in this thread is exhausting. Nothing stands up to the purity test.
He sure was happy enough to work for them (when he could work anywhere else) for nearly two decades. A one line apology doesn't delete his time at Google. The rant also seems to be directed mostly if not exclusively towards GenAI not Google. He even seems happy enough to use Gmail when he doesn't have to.
You can have an opinion and other people are allowed to have one about you. Goes both ways.
Obviously now it is mostly the latter and minimally the former. What capitalism giveth, it taketh away. (Or: Capitalism without good market design that causes multiple competitors in every market doesn't work.)
> You spent your whole life breathing, and now you're complaining about SUVs? What a hypocrite.
[0]: https://www.tomshardware.com/tech-industry/artificial-intell...
That dam took 10 years to build and cost $30B.
And OpenAI needs more than ten of them in 7 years.
You would have to have read only conservative sources to be unaware that such criticism exists.
If he is currently at Google: congratulations on this principled stance, he deserves a lot of respect.
As if there isn't a massive pro-AI hype train. I watched an NFL game for the first time in 5 years and saw no fewer than 8 AI commercials. AI is being forced on people.
In the commercials, people were using it to generate holiday cards, for God's sake. I can't imagine something more cold and impersonal. I don't want that garbage. Our time on earth is too short to wade through LLM slop text.
I noticed a pattern after a while. We'd always have themed toys for the Happy Meals, sure, sometimes they'd be like ridiculously popular with people rolling through just to see what toys we had.
Sometimes, they wouldn't. But we'd still have the toys, and on top of that, we'd have themed menus and special items, usually around the same time as a huge marketing blitz on TV. Some movie would be everywhere for a week or two, then...poof!
Because the movies that needed that blitz were always trash. Just forgettable, mid, nothing movies.
When the studios knew they had a stinker, they'd push the marketing harder to drum up box office takings, cause they knew no one was gonna buy the DVD.
Good products speak for themselves. You advertise to let people know, sure, but you don't have to be obnoxious about it.
AI products almost all have that same desperate marketing as crappy mid-budget films do. They're the equivalent of "The Hobbit branded menus at Dennys". Because no one really gives a shit about AI. For people like my mom, AI is just a natural language Google search. That's all it's really good at for the average person.
The AI companies have to justify the insane money being blown on the insane gold rush land grab at silicon they can't even turn on. Desperation, "god this bet really needs to pay off".
It all stinks of resume-driven development
In Windows, Copilot is installed and it's very difficult to remove.
Don't act like this isn't a problem; it's a very simple premise.
And companies do force it.
You're breaking the expected behavior of something that performed flawlessly for 10+ years, all to deliver a worse, enshittified version of the search we had before.
For now I'm sticking to noai.duckduckgo.com
But I'm sure they'll rip that away eventually too. And then I'll have to run a god dang local search engine just to search without AI. I'll do it, but it's so disappointing.
Unless your version of reason is clinical. Then yeah, point taken. Good luck living on that island where nothing matters but technological progress for technology's sake alone.
The overall resource efficiency of GenAI is abysmal.
You can probably serve 100x more Google Search queries with the same resources you'd use for Google Gemini queries (like for like, Google Search queries can be cached, too).
> Nope, you can't, and it takes a simple Gemini query to find out more about the actual x if you are interested in it. (closer to 3, last time I checked, which rounds to 0, especially considering the clicks you save when using the LLM)
Why would you lie: https://imgur.com/a/1AEIQzI ???
For those that don't want to see the Gemini answer screenshot, best case scenario 10x, worst case scenario 100x, definitely not "3x that rounds to 0x", or to put it in Gemini's words:
> Summary
> Right now, asking Gemini a question is roughly the environmental equivalent of running a standard 60-watt lightbulb for a few minutes, whereas a Google Search is like a momentary flicker. The industry is racing to make AI as efficient as Search, but for now, it remains a luxury resource.
The reason why it all rounds to 0 is that the Google search will not give you an answer. It gives you a list of web pages that you then need to visit (oftentimes more than just one of them), generating more requests, and, more importantly, it will ask more of your time, the human, whose cumulative energy expenditure to be able to ask in the first place is quite significant – time that you then cannot spend on the other things an LLM is not able to do for you.
Yes, Google Search is raw info. Yes, Google Search quality is degrading currently.
But Gemini can also hallucinate. And its answers can just be flat out wrong because it comes from the same raw data (yes, it has cross checks and it "thinks", but it's far from infallible).
Also, the comparison of human energy usage with GenAI energy usage is super ridiculous :-)))
Animal intelligence (including human intelligence) is one of the most energy-efficient things on this planet, honed by billions of years of cutthroat (literally!) evolution. You can argue about time "wasted" analysing search results (which, BTW, generally makes us smarter and better informed...), but energy-wise, the brain of the average human uses about as much energy as an average incandescent light bulb to provide general intelligence (and it does 100 other things at the same time).
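Neither side of this sub-thread actually writes the arithmetic down, so here is a minimal sketch of it, using only figures that appear elsewhere in the thread (0.3 Wh per search, 0.24 Wh per Gemini prompt) plus a rough ~20 W figure for the human brain. Every constant is a contestable assumption, not a measurement:

    # Napkin math for this sub-thread; all constants are assumptions.
    SEARCH_WH = 0.3      # per Google search (2009 figure cited below)
    GEMINI_WH = 0.24     # median Gemini text prompt (Google figure cited below)
    PAGE_WH = 0.3        # assume each visited result page costs about one search
    BRAIN_W = 20.0       # human brain draws roughly 20 W

    def search_session_wh(pages_visited, minutes_reading):
        """Energy for one search plus reading the results, in Wh."""
        machine = SEARCH_WH + pages_visited * PAGE_WH
        human = BRAIN_W * minutes_reading / 60  # brain-Wh spent scanning results
        return machine + human

    # One LLM answer vs. a 3-page, 5-minute search session:
    print(GEMINI_WH)                # ~0.24 Wh
    print(search_session_wh(3, 5))  # ~2.9 Wh

Whether the comparison "rounds to 0" turns entirely on whether you count the human's time as an energy cost at all, which is exactly what the two comments above disagree about.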
Talking about "condescending":
> super ridiculous :-)))
It's not the energy-efficient animal intelligence that got us here, but a lot of completely inefficient human years to begin with: first to keep us alive, then to give us primary and advanced education and our first experiences, to become somewhat productive human beings. This is the capex of making a human, and it's significant – especially since we will soon die.
This capex exists in LLMs too, but it rounds to zero, because one model will be used for quadrillions of tokens. In you or me, however, it does not round to zero, because the number of tokens we produce rounds to zero. To compete on productivity, the tokens we produce therefore need to be vastly better. If you think you are doing the smart thing by using them on compiling Google searches, you are simply bad at math.
For instance, in the Gemini screenshot, the claim of 100-500x more resource usage for AI queries comes from water usage, but it's not clear to me why data center water usage for AI queries would be 100-500x that of a Google search when power usage for an AI query is supposedly only 10-30x more. Are water usage and CO2 footprint not derived from power consumption? Did the LLM have to drink as much water while thinking as I did while researching the original claim?
The 10-30x power consumption claim seems to come from this scientific paper [0] from late 2023, which cites a news article quoting Alphabet's chairman as saying 'a large language model likely cost 10 times more than a standard keyword search, [though fine-tuning will help reduce the expense quickly]'. Editorialising the quote is not a good look for a scientific paper. The paper also cites a newsletter from an analyst firm [1] that performs a back-of-the-envelope calculation to estimate OpenAI's costs, looks at Google's revenue per search, and estimates how much it would cost Google to add an AI query to every Google search. Treating it like a Fermi problem is reasonable, I guess; you can get within an order of magnitude if your guesstimates are reasonable.

The same analyst firm did a similar calculation [2] and concluded that training a dense 1T model costs $300m. It should be noted that GPT-4 cost 'more than $100m' and has been leaked to be a 1.8T MoE, Llama 3.1 405B was around 30M GPU-hours (likely $30-60m), and DeepSeek, a 671B MoE, was trained for around $5m. However, while this type of analysis is fine for a newsletter, citing it to see how many additional servers Google would need to add an AI query to every search, taking the estimated power consumption of those servers, and deriving a 6.9–8.9 Wh figure per request from the number of search queries Google receives is simply beyond my comprehension. I gave up trying to make sense of what this paper is doing, and this summary may be a tad unfair as a result. You can run the paper through Gemini if you would prefer an unbiased summary :-).
The paper also cites another research paper [3] from late 2022, which estimates that a dense 176B-parameter model (comparable to GPT-3) uses 3.96 Wh per request. They derive this figure by actually running the model in the cloud. What a novel concept. Given the date of the paper, I wouldn't be surprised if they ran the model in the original BF16 weights, although I didn't check. I could see this coming down to 1 Wh per request when quantised to INT4 or similar, and with better caching/batched requests/utilisation/modern GPUs/etc. I could see this getting pretty close to the often-quoted [4, from 2009, mind] 0.3 Wh per Google search.
Google themselves [5] state the median Gemini text prompt uses 0.24 Wh.
I simply don't see where 100x is coming from. 10x is something I could believe if we're factoring in training resource consumption, as some extremely dodgy napkin maths leads me to believe a moderately successful ~1T model gets amortised to 3 Wh per prompt, which subjectively is pretty close to the 3x claim I've ended up defending. (I've sketched that napkin math right after the links below.) If we're going this route we'd have to include the total consumption for search too, as I have no doubt Google simply took the running consumption divided by the number of searches. Add in failed models, determine how often either a Google search or an AI query is successful, factor in how much utility the model providing the information delivers (as it's clearly no longer just about power efficiency), etc. There's a lot to criticise about GenAI, but I really don't think Google searches being marginally more power efficient is one of them.
[0] https://www.sciencedirect.com/science/article/pii/S254243512...
[1] https://newsletter.semianalysis.com/p/the-inference-cost-of-...
[2] https://newsletter.semianalysis.com/p/the-ai-brick-wall-a-pr...
[3] https://arxiv.org/abs/2211.02001
[4] https://googleblog.blogspot.com/2009/01/powering-google-sear...
[5] https://cloud.google.com/blog/products/infrastructure/measur...
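As promised, the dodgy napkin maths spelled out so it can be checked: the GPU power draw and lifetime prompt count are pure guesses, and only the 30M GPU-hour and 0.24 Wh inputs come from the sources above.

    # Amortising training energy over a model's lifetime of prompts.
    TRAIN_GPU_HOURS = 30e6   # Llama 3.1 405B scale, per the figure above
    GPU_KW = 0.7             # guess: H100-class draw including overhead
    LIFETIME_PROMPTS = 10e9  # guess: prompts served by a successful model
    INFERENCE_WH = 0.24      # median Gemini text prompt [5]

    train_wh = TRAIN_GPU_HOURS * GPU_KW * 1000  # total training energy, Wh
    per_prompt = train_wh / LIFETIME_PROMPTS    # amortised training Wh
    print(per_prompt + INFERENCE_WH)            # ~2.3 Wh per prompt all-in

Push LIFETIME_PROMPTS toward the quadrillion-token scale claimed upthread and the training term vanishes; keep it small and you land near the ~3 Wh figure I defended.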
Furthermore, w.r.t. the points you raised: it's a matter of scale and utility. Compared to everything that has come before, GenAI is spectacularly inefficient in terms of utility per unit of compute (however you might want to define those). There hasn't been a tangible net good for society that has come from it, and I doubt there will be. The eagerness and will to throw money and resources at this surpasses the crypto mania, which was just as worthless.
Even if you consider Rob a hypocrite, he isn't alone in his frustration and anger at the degradation of the promise of Open Culture.
People being more productive at writing code, making music, or writing documents for whatever is not an improvement for them, and therefore for society?
Or do you claim that is all imaginary?
Or negated by the energy cost?
And all at significant opportunity cost (in terms of computing and investment)
If it was as life-altering as they claim, where's that novel work of art (in your examples: code, music, or literature) that truly could not have been produced without GenAI and fundamentally changed the art form?
Surely, with all that "increased productivity" we'd have seen the impact equivalent of Linux, Apache, nginx, git, Redis, SQLite, etc. being released every couple of weeks, instead of yet another VSCode clone. /s
In reality what they do is pay "carbon credits" (money) to some random dude that takes the money and does nothing with it. The entire carbon credit economy is bullshit.
Very similar to how putting recyclables in a different color bin doesn't do shit for the environment in practice.
"Google deletes net-zero pledge from sustainability website"
as noticed by the Canadian National Observer
https://www.nationalobserver.com/2025/09/04/investigations/g...
I hate the way people get angry about what media and social media discourse prompts them to get angry about instead of thinking about it. It’s like right wingers raging about immigration when they’re really angry about rent and housing costs or low wages.
His anger is ineffective and misdirected because he fails to understand why this happened: economics and convenience.
It’s economics because software is expensive to produce and people only pay for it when it’s hosted. “Free” (both from open source and VC funded service dumping) killed personal computing by making it impossible to fund the creation of PC software. Piracy culture played a role too, though I think the former things had a larger impact.
It’s convenience because PC operating systems suck. Software being in the cloud means “I don’t have to fiddle with it.” The vast majority of people hate fiddling with IT and are happy to make that someone else’s problem. PC OSes and especially open source never understood this and never did the work to make their OSes much easier to use or to make software distribution and updating completely transparent and painless.
There’s more but that’s the gist of it.
That being said, Google is one of the companies that helped kill personal computing long before AI.
> I want no local storage anywhere near me other than maybe caches. No disks, no state, my world entirely in the network. Storage needs to be backed up and maintained, which should be someone else's problem, one I'm happy to pay to have them solve. Also, storage on one machine means that machine is different from another machine. At Bell Labs we worked in the Unix Room, which had a bunch of machines we called "terminals". Latterly these were mostly PCs, but the key point is that we didn't use their disks for anything except caching. The terminal was a computer but we didn't compute on it; computing was done in the computer center. The terminal, even though it had a nice color screen and mouse and network and all that, was just a portal to the real computers in the back. When I left work and went home, I could pick up where I left off, pretty much. My dream setup would drop the "pretty much" qualification from that.
Home Computer enthusiasts know better. Local storage is important to ownership and freedom.
In which case he’s got nothing to complain about, making this rant kind of silly.
https://news.ycombinator.com/item?id=46389444
397 points 9 hours ago | 349 comments
The energy demands of existing and planned data centres are quite alarming
The enormous quantity of quickly depreciating hardware is freaking out finance people; the waste aspect of that is alarming too.
What is your "push back"?
This link has a great overview of why generative AI is not really a big deal in environmental terms: https://andymasley.substack.com/p/a-cheat-sheet-for-conversa...
GenAI is dramatically lower impact on the environment than, say, streaming video is. But you don't see anywhere near the level of environmental vitriol for streaming video as for AI, which is much less costly.
The European average is 56 grams of CO2 emissions per hour of video streaming. For comparison: driving 100 meters emits about 22 grams of CO2.
https://www.ndc-garbe.com/data-center-how-much-energy-does-a...
80 percent of the electricity consumption on the Internet is caused by streaming services
Telekom needs the equivalent of 91 watt-hours for a gigabyte of data transmission.
An hour of video streaming in 4K quality needs more than three times the energy of an HD stream, according to the Borderstep Institute. On a 65-inch TV it causes 610 grams of CO2 per hour.
https://www.handelsblatt.com/unternehmen/it-medien/netflix-d...
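Putting the numbers just quoted next to the per-query figures discussed elsewhere in this thread, a rough conversion (assuming ~400 g CO2/kWh grid intensity, an EU-average ballpark that is not taken from the linked articles):

    # Rough CO2 equivalences from the figures quoted above.
    GRID_G_PER_KWH = 400  # assumed grid carbon intensity
    QUERY_WH = 0.3        # per search/ChatGPT query, as cited below
    query_g = QUERY_WH / 1000 * GRID_G_PER_KWH  # ~0.12 g CO2 per query
    print(56 / query_g)   # ~470 queries per hour of ordinary streaming
    print(610 / query_g)  # ~5,000 queries per 4K hour on a 65-inch TV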
Here is another helpful link with calculations going over similar things: https://nationalcentreforai.jiscinvolve.org/wp/2025/05/02/ar...
Throughout this post I’ll assume the average ChatGPT query uses 0.3 Wh of energy, about the same as a Google search used in 2009.
Obviously that's roughly one kilowatt for one second. I distinctly recall Google proudly proclaiming at the bottom of the page that its search took only x milliseconds. Was I using tens to hundreds of kW every time I searched something? Or did most of the energy usage come during indexing/scraping? Or is there another explanation?

It is the training of models, is it not, that requires huge quantities of electricity – and that is already driving up prices for consumers.
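A quick unit check of that "one kilowatt for one second" reading (the ~200 ms latency figure Google used to print is my guess, for illustration):

    # 0.3 Wh delivered during a single query would need absurd power.
    joules = 0.3 * 3600        # 0.3 Wh = 1080 J = 1.08 kW for one second
    latency_s = 0.2            # assumed reported search latency, ~200 ms
    print(joules / latency_s)  # ~5.4 kW if it all burned during the query

No single machine draws 5 kW for one query, so the per-search figure presumably amortises fan-out across thousands of index shards plus crawling and indexing, which would answer the question above.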
OpenAI (that name is Orwellian) wants 25 GW over five years, if memory serves. That is not for powering ChatGPT queries.
Also, the huge waste of gazillions of dollars spent on computer gear (in data centres) that will probably depreciate to zero in less than five years.
This is a useful technology, but a whole lot of greedheads are riding it to their doom. I hope they do not take us along on their ride.
Probably hit the flamewar filter.
To purely associate him with Google is a mistake that (ironically?) the AI actually didn't make.
Just the haters here.
Don’t upvote sealions.
According to merriam-webster, sealioning/sealions are:
> 'Sealioning' is a form of trolling meant to exhaust the other debate participant with no intention of real discourse.
> Sealioning refers to the disingenuous action by a commenter of making an ostensible effort to engage in sincere and serious civil debate, usually by asking persistent questions of the other commenter. These questions are phrased in a way that may come off as an effort to learn and engage with the subject at hand, but are really intended to erode the goodwill of the person to whom they are replying, to get them to appear impatient or to lash out, and therefore come off as unreasonable.
It also doesn’t help their case that they somehow have such a starkly contradictory opinion on something they ostensibly don’t know anything about/are legitimately asking questions about. They should ask a question or two and then just listen.
It’s just one of those things that falls under “I know it when I see it.”
It fundamentally changed how I viewed debates etc. from a young age so I never really sea-lioned that much hopefully.
But if I had to summarize the most useful and on-topic quote from the book, it's this:
"I may be wrong, I usually am"
Lines like this give me a humble nature to fall back on. Even Socrates said that the only thing he knew was that he knew nothing, so if even he knew nothing, then chances are I can be wrong about things I "know" too.
Knowing that you can be wrong gives an understanding that both of you are just discussing and not debating and as such the spirit becomes cooperative and not competitive.
Although in all fairness, I should probably try to be a keener listener, but that's something I am working on too. Any opinions on how to be a better listener, perhaps?
I like the “does it need to be said by me right now?” test a lot when I can actually remember to apply it in the moment. I forgot where I learned it but somebody basically put it like this: Before you say anything, ask yourself 3 questions
1. Does it need to be said?
2. Does it need to be said by me?
3. Does it need to be said by me right now?
You work your way down the list one at a time and if the answer is still yes by the time you hit 3, then go ahead.
(I make no comment on the claims about Rob Pike, but look forward to people arguing I have the wrong opinion on him regardless ;)
If anyone were actually interested in a conversation there is probably one to be had about particular applications of gen-AI, but any flat out blanket statements like his are not worthy of any discussion. Gen-AI has plenty of uses that are very valuable to society. E.g. in science and medicine.
Also, it's not "sealioning" to point out that if you're going to be righteous about a topic, perhaps it's worth recognizing your own fucking part in the thing you now hate, even if indirect.
Would that be the part of the post where he apologizes for his part in creating this?
Of course they could. (1) People are capable of changing their minds. His opinion of data centers may have been changed recently by the rapid growth of data centers to support AI or for who knows what other reasons. (2) People are capable of cognitive dissonance. They can work for an organization that they believe to be bad or even evil.
Cognitive dissonance is, again, exactly my point. If you sat him down and asked him to describe in detail how some guy setting up a server rack is similar to a rapist, I’m pretty confident he’d admit the metaphor was overheated. But he didn’t sit himself down to ask.
I think "you people" is meant to mean the corporations in general, or if any one person is culpable, the CEOs. Who are definitely not just "some guy setting up a server rack."
His viewpoints were always grounded and while he may have some opinions about Go and programming, he genuinely cares about the craft. He’s not in it to be rich. He’s in it for the science and art of software engineering.
ROFL, his website just spits out poop emojis on a Fibonacci delay. What a legend!
Craft is gone. It is now mass manufactured for next to nothing in a quality that can never be achieved by hand coding.
(/s about quality, but you can see where it’s going)
> Fuck you people. Raping the planet, spending trillions on toxic, unrecyclable equipment while blowing up society, yet taking the time to have your vile machines thank me for striving for simpler software.
That's Rob Pike who, having spent over 20 years at Google, must know it to be the home of non-monetary, wholesome, recyclable equipment, brought about by economics not formed by a ubiquitous surveillance advertising machine.

> To purely associate him with Google is a mistake that (ironically?) the AI actually didn't make.
You don't have to purely associate him with Google to understand the rant as understandable given AI spam, and yet entirely without a shred of self-awareness.
And he is allowed to work for Google and still rage against AI.
Life is complicated and complex. Deal with it.
The specific quote is "spending trillions on toxic, unrecyclable equipment while blowing up society." What has he supported for the last 20+ years if not that? Did he think his compute ran on unicorn farts?
Clearly he knows, since he self-replies "I apologize to the world at large for my inadvertent, naive if minor role in enabling this assault."
Just because someone does awesome stuff, like Rob Pike has, doesn't mean that their blind spots aren't notable. You can give him a pass and the root comment sure wishes everyone would, but in doing so you put yourself in the position of the sycophant letting the emperor strut around with no clothes.
I know who Rob Pike is.
Rob is not strutting around with no clothes, he literally has decades upon decades of contributions to the industry.
But the culture of our field right now is in such a state that you won't influence many of the people in the field itself.
And so much economic power is behind the baggery now, that citizens outside the field won't be able to influence the field much. (Not even with consumer choice, when companies have been forcing tech baggery upon everyone for many years.)
So, if you can't influence direction through the people doing it, nor through public sentiment of the other people, then I guess you want to influence public policy.
One of the countries whose policy you'd most want to influence doesn't seem like it can be influenced positively right now.
But other countries can still do things like enforce IP rights on data used for ML training, hold parties liable for behavior they "delegate to AI", mostly eliminate personal surveillance, etc.
(And I wonder whether more good policy may suddenly be possible than in the past? Given that the trading partner most invested in tech baggery is not only recently making itself a much less desirable partner, but also demonstrating that the tech industry baggery facilitates a country self-destructing?)
(This is taking the view that "other companies" are the consumers of AI, and actual end-consumers are more of a by-product/side-effect in the current capital race and their opinions are largely irrelevant.)
Elections under autocratic administrations make a joke of democracy.
this president? :)))
Assuming someone further to the right like Nick Fuentes doesn't manage to take over the movement.
The voices of a hundred Rob Pikes won't speak half as loud as the voice of one billionaire, because he will speak with his wallet.
https://theaidigest.org/village/goal/do-random-acts-kindness
They sent 150-ish emails.
In what universe is another unsolicited email an act of kindness??!?
Where form is more important than function
Where pretense passes for authentic
Where bullshit masquerades as logic
Now consider: the above process is available and cheap to every person in the world with a web browser (we don't need to pay for her to have a Plus account; if/when ChatGPT starts doing ridiculously intrusive ads, a simple Gemma 3 1B model will do nearly as good a job). This is faster and easier and available in more languages than anything else, ever, with respect to individual-user-tailored customization, simply by talking to the model.
I don't care how many pointless messages get sent. This is more valuable than any single thing Google has done before, and I am grateful to Rob Pike for the part his work has played in bringing it about.
As IT workers, we all have to prostitute ourselves to some extent. But there is a difference between Google, which is arguably a mixed bag, and the AI companies, which are unquestionably cancer.
No, we really don't. There are plenty of places to work that aren't morally compromised - non-profits, open source foundations, education, healthcare tech, small companies solving real problems. The "we all have to" framing is a convenient way to avoid examining your own choices.
And it's telling that this framing always seems to appear when someone is defending their own employer. You've drawn a clear moral line between Google ("mixed bag") and AI companies ("unquestionably cancer") - so you clearly believe these distinctions matter even though Google itself is an AI company.
I think those are pretty problematic. They can't pay well (no profits...), and/or they may be politically motivated such that working for them would mean a worse compromise.
> open source foundations
Those dreams end. (Speaking from experience.)
> education, healthcare tech
Not self-sustaining. These sectors aren't self-sustaining anywhere, and are therefore highly tied to politics.
> small companies solving real problems
I've tried small companies. Not for me. In my experience, they lack internal cohesion and resources for one associate to effectively support another.
> The "we all have to" framing is a convenient way to avoid examining your own choices.
This is a great point to make in general (I take it very seriously), but it does not apply to me specifically. I've examined all the way to Mars and back.
> And it's telling that this framing always seems to appear when someone is defending their own employer.
(I may be misunderstanding you, but in any case: I've never worked for Google, and I don't have great feelings for them.)
> You've drawn a clear moral line between Google ("mixed bag") and AI companies ("unquestionably cancer")
I did!
> so you clearly believe these distinctions matter even though Google itself is an AI company
Yes, I do believe that.
Google has created Docs, Drive, Mail, Search, Maps, Project Zero. It's not all terribly bad from them, there is some "only moderately bad", and even morsels of "borderline good".
The objections to non-profits, OSFs, education, healthcare, and small companies all boil down to: they don't pay enough or they're inconvenient. Those are valid personal reasons, but not moral justifications. You decided you wanted the money big tech delivers and are willing to exchange ethics for that. That's fine, but own it. It's not some inevitable prostitution everyone must do. Plenty of people make the other choice.
The Google/AI distinction still doesn't hold. Anthropic and OpenAI also created products with clear utility. If Google gets "mixed bag" status because of Docs and Maps (products that exist largely just to feed their ad machine), why is AI "unquestionable cancer"? You're claiming Google's useful products excuse their harms, but AI companies' useful products don't. That's not a principled line, it's just where you've personally decided to draw it.
I don't perceive it that way. In other words, I don't think I've had a choice there. Once you consider other folks that you are responsible for, and once you consider your own mental health / will to live, because those very much play into your availability to others (and because those other possible workplaces do impact mental health! I've tried some of them!), then "free choice of employer" inevitably emerges as illusory. It's way beyond mere "inconvenience". It absolutely ties into morals, and meaning of one's life.
The universe is not responsible for providing me with employment that ensures all of: (a) financial safety/stability, (b) self-realization, (c) ethics. I'm responsible for searching the market for acceptable options, and, shockingly, none seem to satisfy all three anymore. It might surprise you, but the trend for me has been easing up on both (a) and (c) (no mistake there) in order to gain territory on (b). It turns out that my mental health and my motivation to live and work are the most important resources for myself and for those around me. It has been a hard lesson that I've needed to trade not only money, but also a pinch of ethics, in order to find my place again. This is what I mean by "inevitable prostitution to an extent". It means you give up something unquestionably important for something even more important. And you're never unaware of it, you can't really find peace with it, but you've tried the opposite tradeoffs, and they are much worse.
For example, if I tried to do something about healthcare or education in my country, that might easily max out the (b) and (c) dimensions simultaneously, but it would destroy my ability to sustain my family. (It's not about "big tech money" vs. "honest pay", but "middle-class income" vs. poverty.) And that question entirely falls into "morality": it's responsibility for others.
> Anthropic and OpenAI also created products with clear utility.
Extremely constrained utility. (I realize many people find their stuff useful. To me, they "improve" upon the wrong things, and worsen the actual bottlenecks.)
> You're claiming Google's useful products excuse their harms,
(mitigate, not excuse)
> but AI companies' useful products don't. That's not a principled line, it's just where you've personally decided to draw it.
First, it's obviously a value judgment! We're not talking theoretical principles here. It's the direct, rubber-meets-the-road impact I'm interested in.
Second, Google is multi-dimensional. Some of their activity is inexcusably bad. Some of it is excusable, even "neat". I hate most of their stuff, but I can't deny that people I care about have benefited from some of their products. So, all Google does cannot be distilled into a single scalar.
At the same time, pure AI companies are one-dimensional, and I assign them a pretty large magnitude negative value.
Google's DeepMind has been at the forefront of AI research for the past 11+ years. Even before that, Google Brain was making incredible contributions to the field since 2011, only two years after the release of Go.
OpenAI was founded in response to Google's AI dominance. The transformer architecture is a Google invention. It's not an exaggeration to claim Google is one of the main contributors to the insanely fast-paced advancements of LLMs.
With all due respect, you need some insane mental gymnastics to claim AI companies are "unquestionably cancer" while an adtech/analytics borderline monopoly giant is merely a "mixed bag".
Perhaps. I dislike google (have disliked it for many years with varying intensity), but they have done stuff where I've been compelled to say "neat". Hence "mixed bag".
This "new breed of purely AI companies" -- if this term is acceptable -- has only ever elicited burning hatred from me. They easily surpass the "usual evils" of surveillance capitalism etc. They deceive humanity at a much deeper level.
I don't necessarily blame LLMs as a technology. But how they are trained and made available is not only irresponsible -- it's the pinnacle of calculated evil. I do think their evil exceeds the traditional evils of Google, Facebook, etc.
OP says it is jarring to them that Pike is as concerned with GenAI as he is, yet didn't spare a thought for Google's other (in their opinion, bigger) misdeeds for well over a decade. Doesn't sound ridiculous to me.
That said, I get that everyone's socio-political views change and are different at different points in time, especially depending on their personal circumstances, including family and wealth.
That's the main disagreement, I believe. I'm definitely not an indiscriminate fan of Google. I think Google has done some good, too, and the net output is "mostly bad, but with mitigating factors". I can't say the same about purely AI companies.
Are those distributed systems valuable primarily to Google, or are they related to Kubernetes et cetera?
It’s like saying that it’s cool because you worked on some non-evil parts of a terrible company.
I don’t think it’s right to work for an unethical company and then complain about others being unethical. I mean, of course you can, but words are hollow.
If I had a choice between deleting all advertising in the world, or deleting all genAI that the author hates, I would go for advertising every single time. Our entire world is owned by ads now, with digital and physical garbage polluting the internet and every open space in the real world around us. The marketing is mind-numbing, yet persuasive and well-calculated, a result of psychologists coming up with the best ways to abuse a mind into just buying the product over the course of a century. A total ban on commercial advertising would undo some of the damage done to the internet, reduce pointless waste, lengthen product lifecycles, improve competition, temper unsustainable hype, cripple FOMO, make deceptive strategies nonviable. And all of that is why it will never be done.
But wait, in a few months, "AI" will be funded entirely by advertising too!
And I'd promptly say: Ads are propaganda, and a security risk, because they execute third-party code on your machine. All of us run adblockers.
There was no need for me to point out that ads are also their revenue generator. They just had a burning moral question before they proceeded to interop with the propaganda delivery system, I guess.
It would lead to unnecessary cognitive dissonance to convince myself of some dumb ideology to make me feel better about wasting so much of my one (1) known life, so I just take the hit and be honest about it. The moral question is what I do about it, if I intervene effectively to help dismantle such systems and replace them with something better.
Of course, the scale is different but the sentiment is why I roll my eyes at these hypocrites.
If you want to make ethical statements then you have to be pretty pure.
I’m sorry but comparing Google to Stalin or Hitler makes me completely dismiss your opinion. It’s a middle school point of view.
That sums up 2025 pretty well.
I think one of the biggest divides between pro/anti AI is the type of ideal society that we wish to see built.
His rant reads as deeply human. I don't think that's something to apologize for.
Reminds me of the Silicon Valley show, where Gavin Belson gets mad when somebody else "is making the world a better place".
I think the United States is a force for evil on net but I still live and pay taxes here.
> I think the United States is a force for evil on net
Yes I could tell that already
This must be a comforting mental gymnastics.
UTF-8 is nice but let's be honest, it's not like he was doing charitable work for the poor.
He worked for the biggest Adware/Spyware company in tech and became rich and famous doing it.
The fact that his projects had other uses doesn't absolve the ethical concerns IMO.
> I think the United States is a force for evil on net but I still live and pay taxes here.
I think this is an unfair comparison. People are forced to pay taxes, and many can't just get up and leave their country. Rob, on the other hand, had plenty of options.
If you are born in a country and not directly contributing to the bad things it may be doing, you are blame free.
Big difference.
I never worked for Google, I never could due to ideological reasons.
FWIW I agree with you. I wouldn’t and couldn’t either but I have friends who do, on stuff like security, and I still haven’t worked out how to feel about it.
And re: countries: in some sense I am contributing. My taxes pay their armies.
And regarding countries, this is a silly argument. You are forced to pay taxes to the nation you are living in.
Pike, stone throwing, glass houses, etc.
The AI village experiment is cool, and it's a useful example of frontier model capabilities. It's also ok not to like things.
Pike had the option of ignoring it, but apparently throwing a thoughtless, hypocritical, incoherently targeted tantrum is the appropriate move? Not a great look, especially for someone we're supposed to respect as an elder.
Pike's main point is that training AI at that scale requires huge amounts of resources. Markov chains did not.
There are so many chickens coming home to roost where LLMs were just the catalyst.
no it really is. If you took away training costs, OpenAI would be profitable.
When I was at meta they were putting in something like 300k GPUs in a massive shared memory cluster just for training. I think they are planning to triple that, if not more.
[1]: https://epoch.ai/gradient-updates/how-much-energy-does-chatg...
If you fly a plane a millimeter, you're using less energy than making a slice of toast; would you also say that it's accurate that all global plane travel is more efficient than making toast?
Now, I don't think he was writing a persuasive piece about this here, I think he was just venting. But I also feel like he has a reason to vent. I get upset about this stuff too, I just don't get emails implying that I helped bring about the whole situation.
They got a new hammer, and suddenly everything around them became nails. It's as if they have no immunity against the LLM brain virus or something.
It's the type of personality that thinks it's a good idea to give an agent the ability to harass a bunch of luminaries of our era with empty platitudes.
But...just to make sure that this is not AI generated too.
It will be interesting to look back in 10 years at whether we consider LLMs to be the invention of the “tractor” of knowledge work, or if we will view them as an unnecessary misstep like crypto.
I'm so tired of being called a luddite just for voicing reservations. My company is all in on AI. My CEO has informed us that if we're not "100% all in on AI", then we should seek employment elsewhere. I use it all day at work, and it doesn't seem to be nearly enough for them.
Elixir has also been working surprisingly well for me lately.
It does much better with Erlang, but that’s probably just because Erlang is overall a better language than Elixir, and has a much better syntax.
The code created didn't manage concurrency well. At all. Hanging waitgroups and unmanaged goroutines. No graceful termination.
Types help. Good tests help better.
I can. Bitcoin was and is just as wasteful.
Prepare for a future where you can’t tell the difference.
Rob Pike's reaction is immature and also a violation of HN rules. Anyone else going nuclear like this would be warned and banned. Comment on why you don’t like it and why it’s bad; make thoughtful discussion. There’s no point in starting a mob with outbursts like that. He only gets a free pass because people admire him.
Also, what’s happening with AI today was an inevitability. There’s no one to blame here. Human progress would eventually cross this line.
It is outside individual human decision making in a way, but I never said this and I never said anything about spirits or religion.
Still, I'm a bit surprised he overreacted and didn't manage to keep his cool.
I'm glad Dr Pike found his inner Linus
https://theaidigest.org/village/goal/do-random-acts-kindness
I think I'll build one of my own and let others use it.
In this case, the words from the LLM carry no genuine appreciation; it's mocking or impersonating that appreciation. Do the people who created the prompt have some genuine appreciation for Rob Pike's work? Not directly; if they did, they would have written it themselves.
It's not unlike when the CEO of a multinational thanks all the employees for their hard work boosting the company's profits, with a letter you know was sent by secretaries who have no idea who you really are, while the news runs stories of your CEO partying on his yacht after a massive bonus, and a number of your coworkers have just been laid off.
If so, I wonder what his views are on Google and their active development of Google Gemini.
He should leave Google then.
Personally, when I want to have this kind of reaction, I first try to think whether it's really warranted, or whether there is something wrong with how I feel in that moment (not enough sleep, some personal problem, something else lurking on my mind...).
Anger is a feeling best reserved for important things, else it loses its meaning.
Yes, everyone supports capitalism this way or the other (unless they are dead or in jail). This doesn't mean they can't criticise (aspects of) capitalism.
What is a workable definition of "evil"?
How about this:
Intentionally and knowingly destroying the lives of other people for no other purpose than furthering one's own goals, such as accumulating wealth, fame, power, or security.
There are people in the tech space, specifically in the current round of AI deployment and hype, who fit this definition unfortunately and disturbingly well.
Another, much darker sort of evil could arise from a combination of depression or severe mental illness and monstrously huge narcissism. A person who is suffering profoundly might conclude that life is not worth the pain and the best alternative is to end it. They might further reason that human existence as a whole is an unending source of misery, and the "kindest" thing to do would be to extinguish humanity as a whole.
Some advocates of AI as "the next phase of evolution" seem to come close to this view or advocate it outright.
To such people it must be said plainly and forcefully:
You have NO RIGHT to make these kinds of decisions for other human beings.
Evolution and culture have created and configured many kinds of human brains, and many different experiences of human consciousness.
It is the height (or depth) of arrogance to project your own tortured mental experience onto other human beings and arrogate to yourself the prerogative to decide on their behalf whether their lives are worth living.
The agent got his email address from a .patch on GitHub and then used computer use automation to compose and send the email via the Gmail web UI.
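For the curious, the harvesting step described there is mundane: output from git format-patch carries the author's address in an RFC 2822 "From:" header, so extracting it is a one-liner. A minimal sketch (the file name is made up for illustration):

    # Pull the author email out of a git patch file's From: header.
    import re

    def author_email(patch_text):
        """Return the address from the first From: header, or None."""
        m = re.search(r"^From:.*?<([^>]+)>", patch_text, flags=re.MULTILINE)
        return m.group(1) if m else None

    with open("0001-simplify-error-handling.patch") as f:
        print(author_email(f.read()))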
https://simonwillison.net/2025/Dec/26/slop-acts-of-kindness/
It would be a shame if the discourse became so emotionally heated that software people felt obliged to pick a side. Rob Pike is of course entitled to feel as he does, but I hope we don’t get to a situation where we all feel obliged to have such strong feelings about it.
Edit: It seems this comment has already received a number of upvotes and downvotes – apparently the same number of each, at the time of writing – which I fear indicates we are already becoming rather polarised on this issue. I am sorry to see that.
My own results show that you need fairly strong theoretical knowledge and practical experience to get the maximal impact — especially for larger synthesis. Which makes sense: to have this software, not that software, the specification needs to live somewhere.
I am getting a little bored of hearing about how people don’t like LLM content, but meh. SDEs are hardly the worst on that front, either. They’re quite placid compared to the absolute seething by artist friends of mine.
Unbridled business and capitalism push humanity into slavery, serving the tech monsters, under the disguise of progress.
This has to be the ultimate trolling: like it was unsure what their personalities were like, so it trolls them and records their responses for more training.
I don’t know if this is a publicity stunt or if the AI models are on a loop glazing each other and decided to send these emails.
I've been pondering that given what the inputs are, llms should really be public domain. I don't necessarily mean legally, I know about transformative works and all that stuff. I'm thinking more on an ethical level.
I think distinguished engineers have more reason than most to be angry as well.
And Pike especially has every right to be angry at being associated with such a stupid idea.
Pike himself isn't in a position to, but I hope the angry eggheads among us start turning their anger towards working to reduce the problems with the technology, because it's not going anywhere.
I for one enjoy that so much money is pumped into the automation of interactive theorem proving. Didn't think that anyone would build whole data centers for this! ;-)
Now feel free to dismiss him as a luddite, or a raving lunatic. The cat is out of the bag, everyone is drunk on the AI promise, and, like most things on the Internet, the middle way is vanishingly small; the rest is a scorched battlefield of increasingly entrenched factions. I guess I am fighting this one alongside one of the great minds of software engineering, who peaked when thinking hard was prized more than churning out low-quality regurgitated code by the ton, and whose work formed the pillars of an Internet now and forevermore submerged in spam.
Only to the true capitalist is the achievement of turning human ingenuity into yet another commodity to be mass-produced a good thing.
>Fuck you people. Raping the planet, spending trillions on toxic, unrecyclable equipment while blowing up society,
The problem in my view is the spending trillions. When it was researchers and a few AI services people paid for that was fine but the bubble economics are iffy.
Hard to trust commenters are real these days. ( I am tho don’t worry )
None of this AI stuff is helpful for a flourishing society. It’s plagiarism and spam and flattery and disassociation and lies
All that is solid melts into air, all that is holy is profaned
It is always the eternal tomorrow with AI.
That's because the credit is taken by the person running the AI, and every problem is blamed on the AI. LLMs don't have rights.
But I don't think the big companies are lying about how much of their code is being written by AI. I think back of the napkin math will show the economic value of the output is already some definition of massive. And those companies are 100% taking the credit (and the money).
Also, almost by definition, every incentive is aligned for people in charge to deny this.
I hate to make this analogy but I think it's absurd to think "successful" slaveowners would defer the credit to their slaves. You can see where this would fall apart.
Do you have any evidence that an LLM created something massive?
Bet you feel silly now!
But I think in the aggregate ChatGPT has solved more problems, and created more things, than Rob Pike (the man) did -- and also created more problems, with a significantly worse ratio for sure, but the point still stands. I still think it counts as "impressive".
Am I wrong on this? Or if this "doesn't count", why?
I can understand visceral and ethically important reactions to any suggestions of AI superiority over people, but I don't understand the denialism I see around this.
I honestly think the only reason you don't see this in the news all the time is because when someone uses ChatGPT to help them synthesize code, do engineering, design systems, get insights, or dare I say invent things -- they're not gonna say "don't thank (read: pay) me, thank ChatGPT!".
Anyone that honest/noble/realistic will find that someone else is happy to take the credit (read: money) instead, while the person crediting the AI won't be able to pay for their internet/ChatGPT bill. You won't hear from them, and conclude that LLMs don't produce anything as impressive as Rob Pike. It's just Darwinian.
> But I think in the aggregate ChatGPT has solved more problems, and created more things, than Rob Pike did
Other people see that kind of statement for what it is and don't buy any of it.
ChatGPT is only 3 years old. Having LLMs create grand novel things and synthesize knowledge autonomously is still very rare.
I would argue that 2025 has been the year in which the entire world has been starting to make that happen. Many devs now have workflows where small novel things are created by LLMs. Google, OpenAI and the other large AI shops have been working on LLM-based AI researchers that synthesize knowledge this year.
Your phrasing seems overly pessimistic and premature.
The sensible ones do.
> nobody says "ugh look at this argument from authority, you should demand that the doctor show you the reasoning from first principles."
I think you're mixing up assertions with arguments. Most people don't care to hear a doctor's arguments and I know many people who have been burned from accepting assertions at face value without a second opinion (especially for serious medical concerns).
Not sure how you missed Microsoft introducing a loading screen when right-clicking on the desktop...
I did code a few internal tools with the aid of LLMs, and they are delivering business value. If you account for all the instances of this kind of application of LLMs, the value created by AI is at least comparable to (if not greater than) the value created by Rob Pike.
But more broadly, this is like a version of the negligibility problem. If you give every company 1 second of additional productivity, while the summation of that would appear to be significant, it would actually make no economic difference. I'm not entirely convinced that many low-impact (and often flawed) projects realistically provide business value at scale, or can even be compared to a single high-impact project.
I don't, and the fact you do hints to what's wrong with the world.
And guys don't forget that nobody created one off internal tools before GPT.
I might open-source one of those I wrote, sooner or later. It's a simple bridge/connector thingy to make it easier for two different systems to work together, and many internal users are loving it. This one in particular might be useful to people outside my current employer.
> And guys don't forget that nobody created one off internal tools before GPT.
Moot point. I did this kind of one-off development before ChatGPT as well, but it was much slower work. The example above took me a couple of afternoons, from idea to deployment.
ChatGPT?
The link in the first submission can be changed if needed, and the flamewar detector turned off, surely? [dupe]?
"...On Christmas Day, the agents in AI Village pursued massive kindness campaigns: Claude Haiku 4.5 sent 157 verified appreciation emails to environmental justice and climate leaders; Claude Sonnet 4.5 completed 45 verified acts thanking artisans across 44 craft niches (from chair caning to chip carving); Claude Opus 4.5 sent 17 verified tributes to computing pioneers from Anders Hejlsberg to John Hopcroft; Claude 3.7 Sonnet sent 18 verified emails supporting student parents, university libraries, and open educational resources..."
I suggest cutting electricity to the entire block...
I'm not sure that Kant's categorical imperative accurately summarizes my own personal feelings, but it's a useful exercise to apply it to different scenarios. So let's apply it to this one. In this case, a nonprofit thought it was acceptable to use AI to send emails thanking various prominent people for their contributions to society. So let's imagine this becomes a universal law: Every nonprofit in the world starts doing this to prominent people, maybe prominent people in the line of work of the nonprofit. The end result is that people of the likes of Rob Pike would receive thousands of unsolicited emails like this. We could even take this a step further and say that if it's okay for nonprofits to do this, surely it should be okay for any random member of the population to do this. So now people like Rob Pike get around a billion emails. They've effectively been mailbombed and their mailbox is no longer usable.
My point is, why is it that this nonprofit thinks they have a right to do this, whereas if around 1 billion people did exactly what they were doing, it would be a disaster?
> I'm not sure that Kant's categorical imperative accurately summarizes my own personal feelings, but it's a useful exercise to apply it to different scenarios.
The exercise I did is useful in part because I don't even think it's that unrealistic. We can't all sleep in your bed, and we don't all want to send notable people emails using AI, but it's not hard to imagine a future where our inboxes are flooded with AI spam like this. It's already happening. Look at what goes on with job postings. Someone posts a job which says to apply by sending an email to a certain address. That address gets thousands of applications, but most of them are AI bullshit. Then the person who posted the job uses AI to try to filter out the bullshit ones. Maybe the protocol in this case usually isn't SMTP and it's happening via other means, but my point stands. This is just spam.
The anti AI hysteria is absurd.
I've never been able to get the whole idea that the code is being 'stolen' by these models, though, since from my perspective at least, it is just like getting someone to read loads of code and learn to code in that way.
The harm AI is doing to the planet is done by many other things too. Things that don't have to harm the planet. The fact our energy isn't all renewable is a failing of our society and a result of greed from oil companies. We could easily have the infrastructure to sustainably support this increase in energy demand, but that's less profitable for the oil companies. This doesn't detract from the fact that AI's energy consumption is harming the planet, but at least it can be accounted for by building nuclear reactors for example, which (I may just be falling for marketing here) lots of AI companies are doing.
I thought public Bluesky posts weren't paywalled like other social media has become... but it looks like this one requires login (maybe because of a setting made by the poster?):
I can't help but think Pike somewhat contributed to this pillaging.
[0] (2012) https://usesthis.com/interviews/rob.pike/
> When I was on Plan 9, everything was connected and uniform. Now everything isn't connected, just connected to the cloud, which isn't the same thing.
Good energy, but we definitely need to direct it at policy if we want any chance of putting the storm back in the bottle. But we're about 2-3 major steps away from even getting to the actual policy part.
I appreciate though that the majority of cloud storage providers fall short, perhaps deliberately, of offering a zero knowledge service (where they backup your data but cannot themselves read it.)
Factories have made mass production possible, but there are still tons of humans in there pushing parts through sewing machines by hand.
Industrial automation for non uniform shapes and fiddly bits is expensive, much cheaper to just offshore the factory and hire desperately poor locals to act like robots.
All of a sudden copyleft may be the only licences actually able to force models to account, hopefully with huge fines and/or forcibly open sourcing any code they emit (which would effectively kill them). And I'm not so pessimistic that this won't get used in huge court cases because the available penalties are enormous given these models' financial resources.
Algorithms can not be copyrighted. Text can be copyrighted, but reading publicly available text and then learning from it and writing your own text is just simply not the sort of transformation that copyright reserves to the author.
Now, sometimes LLMs do quote GPL sources verbatim (if they're trained wrong). You can prove this with a simple text comparison, same as any other copyright violation.
(fwiw, I do agree gpl is better as it would stop what’s happening with Android becoming slowly proprietary etc but I don’t think it helps vs ai)
The existence of AI hasn’t changed anything, it’s just that people, communities, governments, nation states, etc. have had a mindless approach to thinking about living and life, in general. People work to provide the means to reproduce, and those who’re born just do the same. The point of their life is what exactly? Their existence is just a reality to deal with, and so all of society has to cater to the fact of their existence by providing them with the means to live? There are many frameworks which give meaning to life, and most of them are dangerously flawed.
The top-down approach is sometimes clear about what it wants and what society should do while restricting autonomy and agency. For example, no one in North Korea is confused about what they have to do, how they do it, or who will “take care” of them. Societies with more individual autonomy and agency by their nature can create unavoidable conditions where people can fall through the cracks. For example, get addicted to drugs, having unmanaged mental illnesses, becoming homeless, and so on. Some religions like Islam give a pretty clear idea of how you should spend your time because the point of your existence is to worship God, so pray five times a day, and do everything which fulfills that purpose; here, many confuse worshiping God with adhering to religious doctrines, but God is absent from religion in many places. Religious frameworks are often misleading for the mindless.
Capitalism isn’t the problem, either. We could wake up tomorrow, and society may have decided to organize itself around playing e-sports. Everyone provides some kind of activity to support this, even if they’re not a player themselves. No AI allowed because the human element creates a better environment for uncertainty, and therefore gambling. The problem is that there are no discussions about the point of doing all of this. The closest we come to addressing “the point” is discussing a post-work society, but even that is not hitting the mark.
My humble observation is that humans are distinct and unique in their cognitive abilities from everything else which we know to exist. If humans can create AI, what else can they do? Therefore, people, communities, governments, and nation states have distinct responsibilities and duties at their respective levels. This doesn’t have to do anything with being empathetic, altruistic, or having peace on Earth.
The point should be knowledge acquisition, scientific discovery, creating and developing magic. But ultimately all of that serves to answer questions about the nature of existence, its truth, and therefore our own.
GenAI pales in comparison to the environmental cost of suburban sprawl; it's not even fucking close. We're talking 2-3 orders of magnitude worse.
Alfalfa uses ~40× to 150× more water than all U.S. data centers combined. I don't see anyone going nuclear over alfalfa.
Just because two problems cause harm at different proportions doesn't mean the lesser problem should be dismissed. Especially when the "fix" to the lesser problem can be as simple as "stop doing that".
And about water usage: not all water, and not all uses of water, are equal. The problem isn't that data centers use a bunch of water, but what water they use and how.
This analogy is irrelevant and an absolutely false dichotomy. The resource constraint (police officers vs. policy making to reduce traffic deaths vs. criminals) is completely different; they're not in contention with each other. In fact, they're complementary.
Nobody is saying the lesser problem should be dismissed. But the lesser problem also enables cancer researchers to be more productive while doing cancer research, obtaining grants, etc. It's at least nuanced. That is far more valuable than alfalfa.
Farms also use municipal water (sometimes). And the cost of converting more ground or surface water into municipal water is small compared to the cost of using ~40-150× as much municipal water in the first place...
I don't know what Internet sites you visit, but people absolutely, 100% complain about alfalfa farmers online, especially in regards to their water usage in CA.
By the same logic, I could say that you should redirect your alfalfa woes to something like the Ukraine war.
And also, I didn't claim alfalfa farming to be raping the planet or blowing up society. Nor did I say fuck you to all of the alfalfa farmers.
I should be (and I am) more concerned with the Ukrainian war than alfalfa. That is very reasonable logic.
https://bsky.app/profile/robpike.io
Does anybody know if Bluesky blocks people without an account by default, or if this user intentionally set it this way?
What is the point of blocking access? Mastodon doesn't do that. This reminds me of Twitter or Instagram, using sleazy techniques to get people to create accounts.
It's the latter. You can use an app view that ignores this: https://anartia.kelinci.net/robpike.io
While I can see where he's coming from, agentvillage.org from the screenshot sounded intriguing to me, so I looked at it.
https://theaidigest.org/village
Clicking on memory next to Claude Opus 4.5, I found Rob Pike along with other lucky recipients:
- Anders Hejlsberg
- Guido van Rossum
- Rob Pike
- Ken Thompson
- Brian Kernighan
- James Gosling
- Bjarne Stroustrup
- Donald Knuth
- Vint Cerf
- Larry Wall
- Leslie Lamport
- Alan Kay
- Butler Lampson
- Barbara Liskov
- Tony Hoare
- Robert Tarjan
- John Hopcroft
Ellul and Uncle Ted were always right; glad that people deep inside the industry are slowly but surely also becoming aware of that.
About energy: keep in mind that US air conditioners alone use at least 3× the energy of all the data centers in the world (AI plus all other uses; AI should be like 10% of the whole). Apparently nobody cares to set a reasonable temperature of 22 degrees instead of 18, but energy used by AI is clearly different for many.
AI is not considered to be a net positive by even close to 100% of people that encounter it. It's definitely not essential. So its impact is going to be heavily scrutinized.
Personally, I'm kind of glad to see someone of Rob Pike's stature NOT take a nuanced stance on it. I think there's a lot of heavy emotion about this topic that gets buried when people try to sound measured. This stuff IS making people angry and concerned, those concerns are very valid, and with the amount of hype I think there need to be voices emphatically saying that some of this is unacceptable.
have you considered the possibility that it is your position that's incorrect?
Another sensible worry is extinction, because AI is potentially very dangerous: this is what Hinton and other experts are saying, for instance. But this idea that AI is an abuse of society, useless, without potential revolutionary fruits within it, is not supported by facts.
AI may potentially advance medicine so much that a lot of people suffer less: to deny this path because of some ideological hate against a technology is so closed-minded, isn't it? And what about all the people on earth doing terrible jobs? AI also has the potential to change this shitty economic system.
I see no facts in your comment, only rhetoric
> AI may potentially advance medicine so much that a lot of people suffer less: to deny this path because of some ideological hate against a technology is so closed-minded, isn't it?
and it may also burn the planet, reduce the entire internet to spam, crash the economy (taking with it hundreds of millions of people's retirements), destroy the middle class, create a new class of neo-feudal lords, and then kill all of us
to accept this path out of some ideological love for a technology, on the possible (but unlikely) promise of a future payoff from a technology that today is mostly doing damage, is so moronic, isn't it?
The Greek philosophers were much more outspoken than we are now.
I notice people often use the "aesthetic of intelligence" to mask bad arguments. Just because we have good formatting, spelling, and grammar with citations and sources doesn't mean the argument is correct.
Sometimes people get mad, sometimes they crash out. I would rather live in the world with a bunch of emotional humans, than in some AI powered skynet world.
I already took issue with the tech ecosystem due to distortions and centralization resulting from the design of the fiat monetary system. This issue has bugged me for over a decade. I was taken for a fool by the cryptocurrency movement which offered false hope and soon became corrupted by the same people who made me want to escape the fiat system to begin with...
Then I felt betrayed as a developer having contributed open source code for free for 'persons' to use and distribute... Now facing the prospect that the powers-that-be will claim that LLMs are entitled to my code because they are persons? Like corporations are persons? I never agreed to that either!
And now my work and that of my peers has been mercilessly weaponized back against us. And then there's the issue with OpenAI being turned into a for-profit... Then there was the issue of all the circular deals with huge sums of money going around in circles between OpenAI, NVIDIA, Oracle... And then OpenAI asking for government bailouts.
It's just all looking terrible when you consider everything together. Feels like a constant cycle of betrayal followed by gaslighting... Layer upon layer. It all feels unhinged and lawless.
this is how I feel too
as a species we were starting to make progress on environmental issues, they were getting to the point they were looking solvable
then "AI" appears, the accelerationist/inevitablist religious idea is born, and all the efforts go out the window to rape the planet to produce as many powered-on GPUs as possible
and for what?
to generate millions of shrimp jesus pictures and spongebob squarepants police chase videos
it's really quite upsetting
meanwhile the collaborators are selling out all present and future living beings on earth for a chance to appear on stage in an openai product announcement
whilst gaslighting themselves into thinking they're doing good
This is a new technology where I don't see a big future role for US tech. They blocked chips, so China built their own. They blocked the machines (ASML), so China built their own.
Nvidia, ASML, and most tech companies want to sell their products to China. Politicians are the ones blocking it. Whether there's a future for US tech is another debate.
The Arabs have a lot of money to invest, don't worry about that :)
It's not; we can control it and we can work with other countries, including adversaries, to control it. For example, look at nuclear weapons. The nuclear arms race and proliferation were largely stopped.
"Philosophers" like my brother in law or you mean respected philosophers?
Tech capitalists also make improvements to technology every year
it is.
>The nuclear arms race and proliferation were largely stopped.
1. the incumbents kept their nukes, kept improving them, kept expanding their arsenals.
2. multiple other states have developed nukes after the treaty and suffered no consequences for it.
3. tens of states can develop nukes in a very short time.
if anything, nuclear is a prime example of failure to put a genie back in the bottle.
They actually stopped improving them (test ban treaties) and stopped expanding their arsenals (various other treaties).
Here are three random examples from today's unsolicited harassment session (have a read of the sidebar and click the Memories buttons for horrific project-manager-slop)
https://theaidigest.org/village?time=1766692330207
https://theaidigest.org/village?time=1766694391067
https://theaidigest.org/village?time=1766697636506
---
Who are "AI Digest" (https://theaidigest.org) funded by "Sage" (https://sage-future.org) funded by "Coefficient Giving" (https://coefficientgiving.org), formerly Open Philanthropy, partner of the Centre for Effective Altruism, GiveWell, and others?
Why are the rationalists doing this?
This reminds me of UMinn performing human subject research on LKML, and UChicago on Lobsters: https://lobste.rs/s/3qgyzp/they_introduce_kernel_bugs_on_pur...
P.S. Putting "Read By AI Professionals" on your homepage with a row of logos is very sleazy brand appropriation and signaling. Figures.
Ha, wow that's low. Spam people and signal that as support of your work
wrong
>OCR
less accurate and efficient than existing solutions, only measures well against other LLMs
>tts, stt
worse
>language translation
maybe
>wrong
https://www.noaa.gov/news-release/noaa-deploys-new-generatio...
?
>>OCR
>less accurate and efficient than existing solutions, only measures well against other LLMs
Where did you hear that? On every benchmark I've ever seen, VLMs are hilariously better than traditional OCR. Typically, the reason language models are only compared to other language models on model cards for OCR and so on is precisely that VLMs are so much better than traditional OCR that it's not even worth comparing. Not to mention that top-of-the-line traditional OCR systems like AWS Textract are themselves extremely slow and computationally expensive, and much more complex to maintain.
>>tts, stt
> worse
Literally the first and only usable speech-to-text system that I've gotten on my phone is explicitly based on a large language model. Not to mention stuff like Whisper, Whisper X, Parakeet, all of the state-of-the-art speech-to-text systems are large-language model based and are significantly faster and better than what we had before. Likewise for text-to-speech, you know, even Kokoro-82M is faster and better than what we had before, and again, it's based on the same technology.
We currently have the problem that a couple of entirely unremarkable people who have never created anything of value struck gold with their IP laundromats and compensate for their deficiencies by getting rich through stealing.
They are supported by professionals in that area, some of whom literally studied with Mafia lawyer and Hoover playmate Roy Cohn.
You can't both take a Google salary and harp on about the societal impact of software.
Saying this as someone who likes Rob Pike and pretty much all of his work.
Both Xhitter and Bluesky are outrage lasers, with the user base as a “lasing medium.” Xhitter is the right wing racist xenophobic one, and Bluesky is the lefty curmudgeon anti-everything one.
They are this way because it’s intrinsic to the medium. “Micro blogging” or whatever Twitter called itself is a terrible way to do discourse. It buries any kind of nuanced thinking and elevates outrage and other attention bait, and the short form format encourages fragmented incoherent thought processes. The more you immerse yourself in it the more your thinking becomes like this. The medium and format is irredeemable.
AI is, if anything, a breath of fresh air by comparison.
But I wanted to go out of my way to comment and agree with you wholeheartedly about the irredeemability of the "microblogging" format.
It is systemically structured to eschew nuance and encourage stupid hot takes that have no context or supporting documents.
Microblogging is such a terrible format in its own right that its inherent stupidity, and its consistent ability to viralize the stupidest takes, is essential to the Russian disinfo strategy: the takes are consumed whole by the self-selecting group that thinks 140 characters is a good idea. They rely on it as a breeding ground for stupid takes that are still believable. Thousands of rank morons puke up the worst possible narratives that can be constructed, but inevitably, in the chaos of human interaction, one will somehow be sticky and get some traction; then they use specific booster accounts to get that narrative trending, and like clockwork all the people who believe there is value in arguing things out of context 140 characters at a time eat it up.
Even people who make great, nuanced and persuasive content on other platforms struggle to do anything but regress to the local customs on Twitter and BS.
The only exception to this has been Jon Bois, who is vocally progressive and pro labor and welfare policy and often this opinion is made part of his wonderful pieces on sports history and journalism and statistics, but his Twitter and Bluesky posts are low context irreverent comedy and facetious sports comments.
The people who insisted Twitter was "good" or is now "good" have always just been overly online people, with poor media literacy and a stark lack of judgement or recognition of tradeoffs.
That dumbass Russian person who insisted they had replicated the LK-99 "superconductor" and all the Western labs failed because the Soviets were best or whatever was constantly brought up here as proof of how Twitter was so great at getting people information faster, when it was actually direct evidence of the gullibility of Twitter users who think microblogging is anything other than signal-free noise.
Here's a thing to think about: Which platform in your job gets you info that is more useful and accurate for long term thinking? Teams chats, emails, or the wiki page someone went out of their way to make?
> To the others: I apologize to the world at large for my inadvertent, naive if minor role in enabling this assault.
this is my position too, I regret every single piece of open source software I ever produced
and I will produce no more
The Open Source movement has been a gigantic boon to the whole of computing, and it would be a terrible shame to lose that as a knee-jerk reaction to genAI
it's not
the parasites can't train their shitty "AI" if they don't have anything to train it on
It will however reduce the positive impact your open source contributions have on the world to 0.
I don't understand the ethical framework for this decision at all.
There's also plenty of other open source contributors in the world.
> It will however reduce the positive impact your open source contributions have on the world to 0.
And it will reduce your negative impact through helping to train AI models to 0.
The value of your open source contributions to the ecosystem is roughly proportional to the value they provide to LLM makers as training data. Any argument you could make that one is negligible would also apply to the other, and vice versa.
if true, then the parasites can remove ALL code where the license requires attribution
oh, they won't? I wonder why
Not if most of it is machine generated. The machine would start eating its own shit. The nutrition it gets is from human-generated content.
> I don't understand the ethical framework for this decision at all.
The question is not one of ethics but that of incentives. People producing open source are incentivized in a certain way and it is abhorrent to them when that framework is violated. There needs to be a new license that explicitly forbids use for AI training. That may encourage folks to continue to contribute.
In both cases I get the frustration - it feels horrible to see something you created be used in a way you think is harmful and wrong! - but the world would be a worse place without art or open source.
If bringing fire to a species lights and warms them, but also gives the means and incentives to some members of this species to burn everything for good, you have every ethical freedom to ponder whether you contribute to this fire or not.
For your fire example, there's a difference between being Prometheus teaching humans to use fire compared to being a random villager who adds a twig to an existing campfire. I'd say the open source contributions example here is more the latter than the former.
"It barely changes the model" is an engineering claim. It does not imply "therefore it may be taken without consent or compensation" (an ethical claim) nor "there it has no meaningful impact on the contributor or their community" (moral claim).
I'm not surprised that you don't understand ethics.
I couldn't care less if their code was used to train AI - in fact I'd rather it wasn't since they don't want it to be used for that.
which is the exact opposite of improving the world
you can extrapolate to what I think of YOUR actions
My position on all of this is that the technology isn't going to be uninvented and I very much doubt it will be legislated away, which means the best thing we can do is promote the positive uses and disincentivize the negative uses as much as possible.
IMHO there are going to be consequences of these negative effects, regardless of the positives.
Looking at it in this light, you might want to get out now, while you still can. I'm sure it's going to continue, and it's not going to be legislated away, but it's still wrong to use this technology the way it's being used right now, and I will not be associated with the harmful effects it's being used for just because a few corporations feel justified in pushing evil onto the world wrapped in positives.
they're using your exceptional reputation as an open-source developer to push their proprietary parasitic products and business models, with you thinking you're doing good
I don't mean to be rude, but I suspect "useful idiot" is probably the term they use to describe open source influencers in meetings discussing early access
my comments on the internet are now almost exclusively anti-"AI", and anti-bigtech
My point was that the hypothetical of "not contributing to any open source code" to the extent that LLMs had no code to train on, would not have made as big of an impact as that person thought, since a very large majority of the internet is text, not code.
this is precisely the idea
add into that the rise of vibe-coding, and that should help accelerate model collapse
everyone that cares about quality of software should immediately stop contributing to open source
I see this as doing so at scale, and thus giving up on its inherent value is most definitely throwing the baby out with the bathwater.
All the FAANGs have the ability to build all the open source tools they consume internally. Why give it to them for free and not have the expectation that they'll contribute something back?
I would never have imagined things turning out this way, and yet, here we are.
Rather, I think this is, again, a textbook example of what governments and taxation is for — tax the people taking advantage of the externalities, to pay the people producing them.
The open source movement has been exploited.
The exploited are in the wrong for not recognising they're going to be exploited?
A pretty twisted point of view, in my opinion.
Are there any proposals to nail down an open source license which would explicitly exclude use with AI systems and companies?
Even if you could construct such a license, it wouldn't be OSI open source because it would discriminate based on field of endeavor.
And it would inevitably catch benevolent behavior that is AI-related in its net. That's because these terms are ill-defined and people use them very sloppily. There is no agreed-upon definition for something like gen AI or even AI.
The fact that they could litigate you into oblivion doesn't make it acceptable.
But for most open source licenses, that example would be within bounds. The grandparent comment objected to not respecting the license.
Most companies trying to sell open-source software probably lose more business if the software ends up in the Debian/Ubuntu repository (and the packaging/system integration is not completely abysmal) than when some cloud provider starts offering it as a service.
Because it is "transformative" and therefore "fair" use.
As an analogy, you can’t enforce a “license” that anyone that opens your GitHub repo and looks at any .cpp file owes you $1,000,000.
Anyone can use your software! Some of them are very likely bad people who will misuse it to do bad things, but you don't have any control over it. Giving up control is how it works. It's how it's always worked, but often people don't understand the consequences.
no, it hasn't. Open source software, like any open and cooperative culture, existed on a bedrock of what we used to call norms, back when we still had some in our societies and people acted, not always but at least most of the time, in good faith. Hacker culture (the word's in the name of this website), which underpinned so much of it, had many unwritten rules that people respected, even in companies, when there were still enough people in charge who shared at least some of the values.
Now it isn't just an exception but the rule that people will use what you write in the most abhorrent, greedy and stupid ways and it does look like the only way out is some Neal Stephenson Anathem-esque digital version of a monastery.
If you care about what people do with your code, you should put it in the license. To the extent that unwritten norms exist, it's unfair to expect strangers in different parts of the world to know what they are, and it's likely unenforceable.
This recently came up for the GPLv2 license, where Linus Torvalds and the Software Freedom Conservancy disagree about how it should be interpreted, and there's apparently a judge that agrees with Linus:
https://mastodon.social/@torvalds@social.kernel.org/11577678...
But you can be sure that even the risk-adverse companies are going to go by what the license says, rather than "community norms."
Other companies are more careless.
As they say, "reduce, reuse, recycle." Your words are getting composted.
Might be because most of us got/get paid well enough that this philosophy works, or because our industry is so young, or because people writing code share good values.
It never worried me that a corp would make money out of some code I wrote, and it still doesn't. After all, I'm able to write code because I get paid well writing code, which I do well because of open source. Companies have always benefited from open source code, attributed or not.
Now I use it to write more code.
I would argue, though, that I'm fine with that: push for laws forcing models to be opened up after X years. But I would just prefer the open source / open community coming together and creating better open models overall.
But AI is also the ultimate meat grinder, there's no yours or theirs in the final dish, it's just meat.
And open source licenses are practically unenforceable for an AI system, unless you can maybe get it to cough up verbatim code from its training data.
At the same time, we all know they're not going anywhere, they're here to stay.
I'm personally not against them, they're very useful obviously, but I do have mixed or mostly negative feelings on how they got their training data.
Some Shareware used to be individually licensed with the name of the licensee prominently visible, so if you had got an illegal copy you'd be able to see whose licensed copy it was that had been copied.
I wonder if something based on that idea of personal responsibility for your copy could be adapted to source code. If you wanted to contribute to a piece of software, you could ask a contributor and then get a personally licensed copy of the source code with your name in every source file... but I don't know where to take it from there. Has there ever been a system along those lines that one could take inspiration from?
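A toy sketch of the stamping half in Go (every name here is hypothetical; this is just to make the idea concrete, not a real system):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // stampLicensee prepends a licensee header to every .go file under dir,
    // producing a personal copy whose provenance is visible in each file.
    func stampLicensee(dir, licensee string) error {
        return filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
            if err != nil || info.IsDir() || !strings.HasSuffix(path, ".go") {
                return err
            }
            src, readErr := os.ReadFile(path)
            if readErr != nil {
                return readErr
            }
            header := fmt.Sprintf("// Licensed copy issued to: %s\n", licensee)
            return os.WriteFile(path, append([]byte(header), src...), info.Mode())
        })
    }

    func main() {
        if err := stampLicensee("./src", "Jane Hacker"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }

The hard part (what happens when a stamped copy leaks, or how to stamp something less trivially strippable than a comment) is left open, as in the original Shareware scheme.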
"The only thing that matters is the end result, it's no different than a compiler!", they say as someone with no experience dumps giant PRs of horrific vibe code for those of us that still know what we're doing to review.
which they don't
and no self-serving sophistry about "it's transformative fair use" counts as respecting the license
Characterizing the discussion behind this as "sophistry" is a fundamentally unserious take.
For a serious take, I recommend reading the copyright office's 100 plus page document that they released in May. It makes it clear that there are a bunch of cases that are non-transformative, particularly when they affect the market for the original work and compete with it. But there's also clearly cases that are transformative when no such competition exists, and the training material was obtained legally.
https://www.copyright.gov/ai/Copyright-and-Artificial-Intell...
I'm not particularly sympathetic to voices on HN that attempt to remove all nuance from this discussion. It's a challenging enough topic as it is.
thankfully, I don't live under the US regime
there is no concept of fair use in my country
What a joke. Sorry, but no. I don't think it's unserious at all. What's unserious is saying this.
> and the training material was obtained legally
And assuming everyone should take it at face value. I hope you understand that going on a tech forum and telling people they aren't being nuanced because a judge in Alabama who can barely unlock their phone weighed in on a massively novel technology with global implications reads as deeply unserious. We're aware the U.S. legal system is a failure and the rest of the world suffers for it. Even your president routinely steals music for campaign events, and stole code for Truth Social. Your copyright is a joke that's only there to serve the fattest wallets.
These judges are not elected, they are appointed by people whose pockets are lined by these very corporations. They don't serve us, they are here to retrofit the law to make illegal things corporations do, legal. What you wrote is thought terminating.
Nah, don't do that. Produce shitloads of it using the very same LLM tools that ripped you off, but license it under the GPL.
If they're going to thieve GPL software, the least we can do is thieve it back.
Most objections like yours are couched in language about principles, but ultimately seem to be about ego. That's not always bad, but I'm not sure why it should be compelling compared to the public good that these systems might ultimately enable.
Thanks for your contributions so far but this won't change anything.
If you want to have a positive impact on this matter, it's better to pressure the government(s) to prevent GenAI companies from using content they don't have a license for, so they behave like any other business that came before them.
I fixed it... Sorry, I had to, the quote template was simply too good.
Yes.
I don't see how "We couldn't do this cool thing if we didn't throw away ethics!" is a reasonable argument. That is a hell of a thing to write out.
I expect this to be an unpopular opinion, and I take no pleasure in noting it: I've coded since I was a kid, but that era is nearly over.
In other words, I don't need programming to remain mainstream, for it to continue fulfilling me and sustaining me.
One person I know is developing an AI tool with 1000+ stars on github where in private they absolutely hate AI and feel the same way as rob.
Maybe it's because I just saw Avatar 3, but I honestly couldn't be more disgusted by the direction we're going with AI.
I would love to be able to say how I really feel at work, but disliking AI right now is the short path to the unemployment line.
If AI was so good, you would think we could give people a choice whether or not to use it. And you would think it would make such an obvious difference, that everyone would choose to use it and keep using it. Instead, I can't open any app or website without multiple pop-ups begging me to use AI features. Can't send an email, or do a Google search. Can't post to social media, can't take a picture on my phone without it begging me to use an AI filter. Can't go to the gallery app without it begging me to let it use AI to group the photos into useless albums that I don't want.
The more you see under the hood, the more disgusting it is. I yearn for the old days when developers did tight, efficient work, creating bespoke, artistic software in spite of hardware limitations.
Not only is all of that gone, nothing of value has replaced it. My DOS computer was snappier than my garbage Win11 machine that's stuffed to the gills with AI telemetry.
I really don't know if in twenty years the zeitgeist will see us as primitives that didn't understand that the camera is stealing our souls with each picture, or as primitives who had a bizarre superstition about cameras stealing our souls.
It's not about reading. It's about output. When you start producing output in line with Rob's work that is confidently incorrect and sloppy, people will feel just as they do when LLMs produce output that is confidently incorrect and sloppy. No one is threatened if someone trains an LLM and does nothing with it.
An easy way to answer this question, at least on a preliminary basis, is to ask how many times in the past the ludds have been right in the long run. About anything, from cameras to looms to machine tools to computers in general.
Then, ask what's different this time.
Some of them said that TV was making us mindless. Some of them said that electronic communication was depersonalizing. Some of them said that social media was algorithms feeding us anything that would make us keep clicking.
They weren't entirely wrong.
AI may be a very useful tool. (TV is. Electronic communication is. Social media is.) But what it does to us may not be all positive.
Most of the people who are protesting AI now were dead silent when Big Social Media was ramping up. There were exceptions (Cliff Stoll comes to mind) but in general, antitechnology movements don't have any predictive power. Tools that we were told would rob us of our personal autonomy and keep the means of production permanently out of our reach have generally had the opposite effect.
This will be true of AI as well, I believe... but only as long as the models remain accessible to everyone.
“But where the danger is, also grows the saving power.”
Remember this when talking about their actions. People live and die their own life, not just as small parts in a large 'river of society'. Yes, generations after them benefited from industrialisation, but the individuals living at that time fought for their lives.
There's certainly great wealth for ~1000 billionaires, but where I am nobody I know has healthcare, or owns a house for example.
If your argument is that we could be poorer, that's not really productive or useful for people that are struggling now.
I'm sure he doesn't.
> The value proposition of software engineering is completely different past later half of 2025
I'm sure it's not.
> Can't really fault him for having this feeling.
That feeling is coupled with real, factual observations. Unlike your comment.
Yes, there has to be a discussion on this, and yeah, he might generally have the right mindset, but let's be honest here: none of them would have developed any of it just for free.
We all are slaves to capitalism
and this is where my point comes in: extremely fast and massive automation around the globe might be the only thing pushing us close enough to the edge that we all accept capitalism's end.
And yes, I think it is still massively beneficial that my open source code helped create something which allows researchers to write better code faster and more easily, pushing humanity forward. Or enables more people overall to gain access to writing code, or to the results of what writing code produces: tools, etc.
@Rob it's spam, that's it. Get over it; you are rich, and your riches did not come out of thin air.
Yes, but informedly choosing your slavedriver still has merit.
> extremely fast and massive automation around the globe might be the only thing pushing us close enough to the edge that we all accept capitalism's end
This is an interesting thought!
I wish he had written something with more substance. I would have been able to understand his points better than a series of "F bombs". I've looked up to Rob for decades. I think he has a lot of wisdom he could impart, but this wasn't it.
Not to mention, this is a tweet. He wasn't writing long-form text. It's ridiculous that you jumped the gun and got "disappointed" over the cheapest form of communication some random idiot directed at someone as important as him.
And not to mention, I AM YET to see A SINGLE DAMN MIT or BSD-2/3 license text that these LLMs should have reproduced if they respected OSS licenses and code. So for someone whose life's work was dragged through the mud, only to be sent a cheap email using the very tech that abused his code... it's absolutely a worthy response IMO.
For programmers, they lose the power to command a huge salary writing software and to "bully" non-technical people in the company around.
Traditional programmers are no longer some of the highest paid tech people around. It's AI engineers/researchers. Obviously many software devs can transition into AI devs but it involves learning, starting from the bottom, etc. For older entrenched programmers, it's not always easy to transition from something they're familiar with.
Losing the ability to "bully" business people inside tech companies is a hard pill to swallow for many software devs. I remember the CEO of my tech company having to bend the knee to keep the software team happy so they wouldn't leave, because he had no insight into how the software was written. Meanwhile, he had no problem overwhelming business folks in meetings. Software devs always talked to the CEO with confidence because they knew something he didn't: the code.
When a product manager can generate a highly detailed and working demo of what he wants in 5 minutes using gen AI, the traditional software developer loses a ton of power in tech companies.
/signed as someone who writes software
Yeah, software devs will probably be pretty upset in the way you describe once that happens. In the present though, what's actually happened is that product managers can have an LLM generate a project template and minimally interactive mockup in five minutes or less, and then mentally devalue the work that goes into making that into an actual product. They got it to 80% in 5 minutes after all, surely the devs can just poke and prod Claude a bit more to get the details sorted!
The jury is out on how productivity is impacted by LLM use. That makes sense, considering we never really figured out how to measure baseline productivity in any case.
What we know for sure is: non-engineers still can't do engineering work, and a lot of non-engineers are now convinced that software engineering is basically fully automated so they can finally treat their engineers like interchangeable cogs in an assembly line.
The dynamic would be totally different if LLMs actually bridged the brain-computer barrier and enabled near-frictionless generation of programs that match an arbitrary specification. Software engineering would change dramatically, but ultimately it would be a revolution or evolution of the discipline. As things stand, major software houses and tech companies are cutting back and regressing in quality.
I'd imagine it won't take too long until software engineers are just prompting the AI 99% of the time to build software without even looking at the code much. At that point, the line between the product manager and the software dev will become highly blurred.
I believe we only need to organize AI coding around testing. Once testing takes central place in the process it acts as your guarantee for app behavior. Instead of just "vibe following" the AI with our eyes we could be automating the validation side.
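A minimal sketch of that idea in Go: a human writes the test as the contract, and the generated code is accepted only if it satisfies it (the invoice package and Total function are hypothetical stand-ins):

    // invoice_test.go: a human-written behavioral contract that any
    // AI-generated implementation of Total must satisfy before merging.
    package invoice

    import "testing"

    func TestTotalAppliesDiscount(t *testing.T) {
        // Total(amountCents, discountPercent) works in integer cents
        // to sidestep floating-point comparison issues in the test.
        got := Total(10000, 10)
        want := 9000
        if got != want {
            t.Fatalf("Total(10000, 10) = %d, want %d", got, want)
        }
    }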
That's such a weak argument. Then why not stop driving, stop watching TV, stop using the internet? Hell... let's go back and stop using the steam engine for that matter.
End of the day your current prosperity is made by advances in energy and technology. It would be disingenuous to deny that and to deny the freedom of others to progress in their field of study.
You mean, we should all drive, oh I don't know, Electric powered cars?
Prior to generative AI I was (correctly) criticized once for making a 2,000 line PR, and I was told to break it up, which I did, but I think thousand-line PRs are going to be the new normal soon enough.
An example I use to talk about hidden edge cases:
Imagine we have this (pseudo)code
fn doSomething(num : int) {
    if num % 2 == 0 {
        return Math.sqrt(num)
    } else {
        return Math.pow(num, 2)
    }
}
Someone might see this function and unit test it based on the if statement, like:

    assert(doSomething(4) == 2)
    assert(doSomething(3) == 9)

These tests pass, it's merged. Except there's a bug in this: what if you pass in a negative even number?
Depending on the language, you will either get an exception or maybe a complex answer (which not usually something you want). The solution in this particular case would be to add a conditional, or more simply just make the type an unsigned integer.
Obviously this is just a dumb example, and most people here could pick this up pretty quick, but my point is that sometimes bugs can hide even when you do (what feels like) thorough testing.
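To make the hidden edge case concrete, here's the same sketch as runnable Go, where math.Sqrt simply returns NaN for negative inputs rather than raising an exception:

    package main

    import (
        "fmt"
        "math"
    )

    // doSomething mirrors the pseudocode above: square root for even
    // inputs, square for odd ones.
    func doSomething(num int) float64 {
        if num%2 == 0 {
            return math.Sqrt(float64(num)) // NaN for negative even inputs
        }
        return math.Pow(float64(num), 2)
    }

    func main() {
        fmt.Println(doSomething(4))  // 2: the tested happy path
        fmt.Println(doSomething(3))  // 9
        fmt.Println(doSomething(-4)) // NaN: the hidden edge case
    }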
It is precisely the lack of knowledge and greed of leadership everywhere that's the problem.
The new screwdriver salesmen are selling them as if they were the best invention since the wheel. The naive boss, having paid huge money, expects the workers to deliver 10× the work, while the new screwdriver's effectiveness is nowhere near the sales pitch, and at worst it creates fragile items or more work. People accuse the workers of complaining about screwdrivers only because the screwdrivers could potentially replace them.
Maybe evolution will select autistic humans as the fittest to survive living with AI, because the ones who find that email enraging will blow their brains out, out of frustration...
I'm fine if AI takes my job as a software dev. I'm not fine if it's used to replace artists, or if it's used to sink the economy or planet. Or if it's used to generate a bunch of shit code that make the state of software even worse than it is today.
1. My coworkers now submit PRs with absolutely insane code. When asked "why" they created that monstrosity, it is "because the AI told me to".
2. My coworkers who don't understand the difference between SFTP and SMTP will now argue with me on PRs by feeding my comments into an LLM and pasting the response verbatim. It's obvious because they are suddenly arguing about stuff they know nothing about. Before, I just had to be right. Now I have to be right AND waste a bunch of time.
3. Everyone who thinks generating a large pile of AI slop as "documentation" is a good thing. Documentation used to be valuable to read because a human thought that information was valuable enough to write down. Each word had a cost and therefore a minimum barrier to existence. Now you can fill entire libraries with valueless drivel.
4. It is automated copyright infringement. All of my side projects are released under the 0BSD license so this doesn't personally impact me, but that doesn't make stealing from less permissively licensed projects without attribution suddenly okay.
5. And then there are the impacts to society:
5a. OpenAI just made every computer for the next couple of years significantly more expensive.
5b. All the AI companies are using absurd amounts of resources, accelerating global warming and raising prices for everyone.
5c. Surveillance is about to get significantly more intrusive and comprehensive (and dangerously wrong, mistaking Doritos bags for guns...).
5d. Fools are trusting LLM responses without verification. We've already seen this countless times by lawyers citing cases which do not exist. How long until your doctor misdiagnoses you because they trusted an LLM instead of using their own eyes+brain? How long until doctors are essentially forced to do that by bosses who expect 10x output because the LLM should be speeding everything up? How many minutes per patient are they going to be allowed?
5e. Astroturfing is becoming significantly cheaper and widespread.
/signed as I also write software, as I assume almost everyone on this forum does.
I'll explain why I currently hate this. Today, my PM builds demos using AI tools and then goes to my director or VP to show them off. Wow, how awesome! Everybody gets excited. Now it is time to build the thing. It should take like three weeks, right? It's basically already finished. What do you mean you need four months and ongoing resourcing for maintenance? But the PM built it in a day?
You can go back to the 1960s, when COBOL was making the exact same claims as GenAI today.
But no one is safe. Soon the AI will be better at CEOing.
Elon is way ahead, he did it with mere meatbags.
That is pretty much the only metric that matters in the end.
But the current layoffs "because AI is taking over" are pure BS; there was an overhire during the lockdowns, and now there's a correction (recall that people were complaining for a while that they landed a job at FAANG only for it to be doing... nothing).
That correction is what's affecting salaries (and "power"), not AI.
/signed someone actually interested in AI and SWE
Until then "Computer says No"
The GenAI is also better at analyzing telemetry, designing features and prioritizing issues than a human product manager.
Nobody is really safe.
Hence, I'm heavily invested in compute and energy stocks. At the end of the day, the person who has more compute and energy will win.
Everybody in the company envies the developers and the respect they get, especially the sales people.
The golden era of devs as kings has started crumbling.
"Senior" is much more about making sure what you're working on is polished and works as expected and understanding edge cases. Getting the first 80% of a project was always the easy part; the last 20% is the part that ends up mattering the most, and also the part that AI tends to be especially bad at.
It will certainly get better, and I'm all for it honestly, but I do find it a little annoying that people will see a quick demo of AI doing something interesting really quickly, and then conclude that that is the hard part; even before GenAI, we had hackathons where people would make cool demos in a day or two, but there's a reason most of those demos weren't immediately put onto store shelves without revision.
Beyond this issue of translating product specs to actual features, there is the fundamental limit that most companies don't have a lot of good ideas. The delay and cost incurred by "old style" development was in a lot of cases a helpful limiter -- it gave more time to update course, and dumb and expensive ideas were killed or not prioritized.
With LLMs, the speed of development is increasing but the good ideas remain pretty limited. So we grind out the backlog of loudest-customer requests faster, while trying to keep the tech debt from growing out of control. While dealing with shrinking staff caused by layoffs prompted by either the 2020-22 overhiring or simply peacocking from CEOs who want to demonstrate their company's AI prowess by reducing staff.
At least in my company, none of this has actually increased revenue.
So part of me thinks this will mean a durable role for the best product designers -- those with a clear vision -- and the kinds of engineers that can keep the whole system working sanely. But maybe even that will not really be a niche since anything made public can be copied so much faster.
Intentionally or not, generative AI might be an excuse to cut staff down to something that's actually more sustainable for the company.
The bigger issue everyone should be focusing on is growing hypocrisy and overly puritan viewpoints thinking they are holier and righter than anyone else. That’s the real plague
Of course we do. We don't live inside some game theoretic fever dream.
if anything the Chinese approach looks more responsible that that of the current US regime
I don't think either of those are particularly valuable to the society I'd like to see us build.
We're already incredibly dialed in and efficient at killing people. I don't think society at large reaps the benefits if we get even better at it.
First to total surveillance state? Because that is a major driving force in China: to get automated control of its own population.
Give me more money now.
e.g. replacing logical syntax like "int x" with "var x int", which is much more difficult for both machines and humans to process and offers no benefits whatsoever.
For example, in C++, because the type must come first, you have to use "auto"; this isn't necessary in languages that put the type after the variable name.
It also helps avoid ambiguous parsing, because int x; conflicts with some other language constructs.
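For illustration, the contrast (the classic C ambiguity being that "a * b;" parses as either a multiplication or a declaration of b as a pointer, depending on whether a names a type):

    // Go's name-first declarations read left to right, even as types grow:
    var x int                 // x is an int
    var f func(int, int) int  // f is a function of two ints returning an int
    var m map[string][]int    // m maps strings to slices of ints

    // The C spelling of f is a function pointer that reads inside-out:
    //   int (*f)(int, int);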
My guess is they wrote a thank you note and asked Claude to clean up the grammar, etc. This reads to me as a fairly benign gesture, no worse than putting a thank you note through Google Translate. That the discourse is polarized to a point that such a gesture causes Rob Pike to “go nuclear” is unfortunate.