In this incident, Aurich Lawson of Ars Technica deleted the original article (which had LLM-hallucinated quotes) instead of updating it with the error. He then published a vague non-apology, just like large companies and politicians usually do. And now we learn that this reporter was fired, and yet Ars Technica hasn't published even a snippet of an article about it.
There’s something to be said about the value of owning up to issues and being forthright with actions and consequences. In this age of indignation and fear of being perceived as weak or vulnerable due to honesty, I would’ve thought that Ars would be or could’ve been a beacon for how things should be talked about.
It’s sad to see Ars Technica at this level.
Ars did own up to its mistake both in writing and in firing the author. The author himself fell on his sword in detail on Bluesky.
Your only real complaint is that their published explanation wasn't subjectively good enough for you and that means it's sad to see them at this level?
Not exactly. He wrote a long excuse blaming his illness, sidestepping the fact that he was using AI tools to write for him and not making an effort to fact-check.
Also Bluesky is not Ars Technica. It doesn’t matter what he posts on his own obscure social media page. We’re talking about the journalistic platform where he was given a wide audience.
> Your only real complaint is that their published explanation wasn't subjectively good enough for you and that means it's sad to see them at this level?
Why do you not think that’s a valid complaint? It appears they eventually did part ways, but Ars Technica has also been trying to lay as low as possible and avoid the topic in hopes that it will blow over.
Honestly, it seems like journalism has been in its 'vibe code' era for a decade, publishing whatever it has, typos and all.
This was an institutional error, not an individual reporter's fault. We should also be asking why he was still contributing when he had a high fever. Why did his editors push him to publish his work? I will certainly write code and answer questions while sick if I'm up to it, but I would never push to main in that state.
----------
A reporter whose bailiwick is AI should have known that he needed to check any quotes an LLM spat out. The editorial staff should have been checking too, and this absolutely is representative of their standards if they weren't.
It would probably be worth checking to see if any other articles or employees have similarly disappeared.
There was such a thing, in newspapers up until 2000. Then, as profits nosedived, these sorts of things largely disappeared.
Purely online entities have no way to pay for real editorial staff.
News has no money, compared to news of old. It's part of the reason 99% of modern news is just reporting other people's tweets or whatever.
I can't imagine many news companies having much money for court battles (to force disclosure of documents, or force declassification, or fighting to protect sources). Or spending months or years investigating a story.
Our news sources are poor and weak now.
> There was such a thing, in newspapers up until 2000. Then, as profits nosedived, these sorts of things largely disappeared.
In a lot of ways you're right, but our public radio station (cpr.org) has the largest newsroom in the state, and that newsroom makes up over a third of our staff. So yeah "news companies" don't have news rooms but that's because their business isn't news. It's funneling user data to their parent companies and getting people to click ads.
However, thanks to "listeners [and viewers, and surfers] like you," public media is still working its ass off to make a difference despite being cut loose from the government. It won't work unless you switch your perspective to local news (where most of the real information is anyway) and unless you donate.
Apologies for turning a comment into a mini fund-drive :)
Granted, a few of the remaining newspapers I'm aware of run business awards (Best restaurant, etc), and the way to win is via wining and dining them, even though the paper claims it's based on people's votes.
That style of thinking - of entitlement - probably drove the loss of interest in both cable news and traditional web/paper outlets, as the younger generations started to see through it more.
Tech audiences are the worst to be advertisement dependent on.
Is that how it works where you are? Because over here, the best way to win an award from a publication is to advertise in that publication. Advertise enough, and you'll also become their go-to when they need a quote about anything vaguely related to your restaurant or other business, and once a year or so they'll print some hagiographic article about the amazing things going on under your leadership.
The money (from advertising) that used to go to news now goes elsewhere (Google and Meta).
It’s left very little in terms of resources for staff.
Think about what the quality of commercial software would be like if there wasn’t enough money for QA and testers and top tier devs capped out at $180k with starting roles at 30k and 40k.
That’s the news industry right now. Poorer quality product.
It literally is journal-ism.
Wikipedia: "Journalism is the production and distribution of reports on the interaction of events, facts, ideas, and people that are the "news of the day""
Britannica: "Journalism, the collection, preparation, and distribution of news and related commentary"
Stories from British Newspaper Archive[1]:
- June 1950 Cat in Tree in Sheffield - Sheffield Daily Telegraph
- July 1939 A cat which has sought refuge at the top of a tree on Somerlayton Road, Stockwell, defied all attempts to get it down. - Sunderland Daily Echo.
- June 1956 A cat was rescued from a 60ft. oak tree by Southgate firemen at Abbotshall Avenue, Southgate. - Wood Green weekly herald.
- October 1959 CAT UP TREE I was sorry to hear that your cat had been lost Frances, I hope he is none the worse for his experience up the tree, now. - Penrith Observer.
- July 1956 Cat in tree rescued. Worthing firemen rescued a cat - Worthing Herald.
- July 1955 RESCUED CAT IN TREE - Percy Kemp climbed 40ft up a tree to rescue a cat - Bradford Observer.
- November 1956 An emergency tender from the Eastbourne Fire Brigade went to the rescue of a cat in a tree in Brassey-avenue, Hampden Park - Eastbourne Gazette.
- August 1953 Clifford Morton (25) climbed 120ft up a swaying fir tree to rescue a cat - Coventry Evening Telegraph.
- March 1950 Persian cat belonging to Mrs M. ___ ... heard meow-ing from a 40ft. tree in field nearby - Dundee Evening Telegraph.
- February 1950 CAT UP TREE A telescopic ladder belonging to Birkenhead Fire Service was rushed three miles to Arrowe Park Road, Woodchurch, this afternoon, to rescue a cat which had climbed over 40 feet up a tree - Liverpool Echo
- October 1924 SHOTS AT CAT IN TREE .. It was stated that the boys saw a black Persian 'cat up a tree on the farm, and they fired at it - Daily Mirror
- July 1939 CAT IN TREE FOR TWO DAYS - Hartlepool Northern Daily Mail
- August 1962 CAT IN TREE RESCUED BY FIREMEN - Lincolnshire Free Press
- May 1956 The story of a stray cat, Mr. Budd and a 45ft fir tree was told at Wednesday's annual meeting of the Torquay and South-East Devon branch of the R.S.P.C.A. - Torquay Times
- etc. etc.
When was this imaginary wonderful time you're implying when newspapers were only speaking truth to power with mighty investigative reporting, and not literally a journal of things people did and said in a local area (or on a certain topic)?
[1] https://www.britishnewspaperarchive.co.uk/search/results?bas... tree&retrievecountrycounts=false
I can't just submit shit work all day long and then blame QA when some of it goes through. That's like a burglar saying it's the cops' fault that people got burgled.
They did report on the article quote sourcing debacle at the time - perhaps not as quickly as some would’ve liked, but within a couple of days.
It stays as a mark, immortalizing the error, but it's a better scar than deleting and acting like it never happened.
I also want to note that this last incident response is not typical of the Ars I'm used to.
They never really announced Peter Bright leaving Ars Technica either, though. At least not until much, much later.
I'm not a US citizen and IANAL, so YMMV.
It seems entirely normal and standard to retract articles and publish a note elsewhere that it was retracted. In fact, it's common because if an article had one fabrication it might have others which you haven't discovered yet, so you don't want to keep it up.
Whether they want to announce that the journalist was fired is up to their discretion. But it's not necessary or even normal.
I don't know why you're talking about a "mark", a "scar", that "immortalizes". That's weird and frankly a little disturbing. The journalist got fired and the article got taken down and a note was made by the editor. That's accountability working as intended. I don't know why you want more than that.
Second, as a reader who has followed Ars for more than 10 (15? IDK) years, I've never seen them abruptly retract an article like this. Their modus operandi is to correct articles and own the corrections. This is what I've said all along (this is the third time in this comment train).
We all have scars. From a fall, from a cut, physical, emotional, whatnot. You don't need to feel sad or get disturbed about it. A scar is life's way of making you remember something. If it's of your own making, it makes you remember what not to do. If it's of someone else's making, it makes you remember an unfortunate event you came out of alive.
Owning your mistakes by correcting an article and marking it is greater accountability than saying "this has never happened, nothing to see here, move along". I'll not comment further on the firing of the author. I don't have enough information from any side, and I don't know them well enough, to say anything beyond that I wish he hadn't been fired.
Of course, if someone leaves because of personal reasons or jumping ship, there is no reason to do that. But this is different.
Aside: posting about a new hire is easy and carries no legal liability. Posting about a departure can be a tangled web.
I do agree that some note by Ars would be good here.
https://www.bbc.co.uk/news/articles/cly51dzw86wo
I think they're an outlier, but still I was disappointed by Ars's response. They deleted the article and didn't detail what was wrong with it at all. Felt like a cover-up.
https://www.bbc.co.uk/news/live/cp34d5ly76lt
(edit: technically, it was Panorama. I'm not sure if that is part of the News remit or separate from it).
This was a big disappointment. I read the original article and the comment from the source highlighting the error, knew what was wrong with it, and still think it was the wrong move to just delete the article and all the original comments, and replace it with an editorial note.
This is a kind of cover-up. It's impossible to hide the issue but they went to great lengths to soften the optics and remove the damning content from the public record. They obscured the magnitude of the error. It looks like another "person uses AI and gets some details wrong".
What they did so far, the decisions that allowed the issue to occur in the first place (e.g. no editorial review before publishing), and the first reaction to the incident (just destroy the content, article and comments alike) tell me everything I need to know about the journalistic principles at Ars Technica. It's a major loss of trust for me.
I don’t know about you guys, but I feel like 50% of Ars headlines are completely misleading.
They’ve had this problem for years. They will publish anything that gets them clicks. They do not care if a writer makes things up. They do not care if their headlines are misleading - in fact, that’s the point. They clearly got into the job in order to influence and manipulate people.
They’re bad people, with terrible motivations, and unchecked power. They only walk back when something really really bad happens.
Never trust an Ars headline.
It's not just Ars Technica. I would go as far as saying the big majority. I work at the biggest alliance of public service media in the EU, and my role requires me to interact with editors. I often don't like painting with a broad brush, but I have yet to meet a humble editor. They approach everything with an "I know better than anyone else" attitude. It's probably the "public" aspect of the media, but I would argue it's the editorial aspect too. The rest of the staff are often very nice and down to earth.
They're like "UX experts" in software. One does UX for software, the other does UX for text. Same attitude problems, from the way you describe it. If the expert in something so subjectively judged is seen to be conceding anything, that might undermine their perceived expertise. Any push back is interpreted as somebody challenging their career.
I mean, yes, this happens quite a bit, especially with egotistical people.
But to play devil's advocate, they do have to deal with a massive fuckload of bullshit asymmetry, where people dumber than rocks spew forth a never-ending stream of stupid crap with the authority of an LLM.
My charitable read is that if one has to interact with the public, one naturally develops an understanding of what is wrong with it.
Always? Or since they were bought by Conde Nast in 2008?
I believe they are doing A/B testing on these.
Ah yes, I remember correctly for once: https://arstechnica.com/civis/threads/why-do-front-page-arti...
TL;DR: They have been doing mandatory A/B testing since 2015.
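For context, headline A/B testing usually boils down to comparing click-through rates between two headline variants and checking whether the difference is statistically significant. A minimal sketch of that kind of check (the function name and numbers are illustrative; this is not anything Ars is known to run):

```python
from math import sqrt

def headline_ab(clicks_a: int, views_a: int,
                clicks_b: int, views_b: int) -> float:
    """Two-proportion z-test: is headline B's click-through
    rate significantly different from headline A's?"""
    pa = clicks_a / views_a
    pb = clicks_b / views_b
    # Pooled click rate under the null hypothesis (no difference)
    p = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(p * (1 - p) * (1 / views_a + 1 / views_b))
    return (pb - pa) / se  # z-score; |z| > 1.96 ~ significant at 5%

z = headline_ab(120, 10_000, 150, 10_000)
```

With these made-up numbers, B's 1.5% CTR against A's 1.2% gives z of roughly 1.84, just short of the conventional 1.96 cutoff, so a tester would keep collecting views before switching headlines.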
Because it pays the bills, unfortunately. Google has sucked up all the advertising dollars that used to pay for media and the rest of the world is now doing card tricks to earn scraps to pay the bills.
I have a modicum of experience here. I write for another online media company and, although we produce our own headlines, we are 'strongly encouraged' to write clickbait headlines, to the extent where we are asked to remove instances of specific product names (etc.) in order to be mysterious and not give the game away too early. (Yes, in case it wasn't clear, I hate this!)
That's a very "shoot the messenger" statement. While Aurich is the community "face" of Ars, I very much doubt he has the power to do anything like that.
Exactly! The situation happened, there's no going back, but they had a choice: be transparent about it, and I'm sure people would have appreciated it, maybe even giving them a net positive rather than a negative. The choice they made is the complete opposite, and a sign that no one should trust them.
Ars can, and probably should if they have not already, publish a piece about hallucinations and use of AI in journalism, and own up to their own lack of appropriate controls and reflections. They do not need to drag the authors name into the write up. It can be self critical of themselves as a journalistic outlet.
Ars could have just said "After investigation, we reviewed our editorial process. The author of the article is no longer with the company." factually and objectively.
I can't see how this could possibly be a negative or harmful thing.
A retraction is totally different. It means that an editorial team does not trust any of the underlying article. It’s the biggest stick in journalism and is only reserved for the absolute worst breaches of trust.
When you retract an article and then update the author’s bio to past tense, that’s as clear of a signal as you can ethically send. A publication with clout makes news and writes the first line of people’s obituaries while they’re still alive - a degree of tact, professionalism and newsworthiness comes into play.
I don't think Ars felt they had any choice but to cut off the journalist who made the mistake, especially on such a touchy subject. It's impossible for us readers to know if this was a single lapse of judgement or a bad habit. Regardless, the communication should have been better.
Their actions so far just make me think they're panicking and found a scapegoat to blame it on, but they're not going to put any new checks in place so it'll just happen again.
I feel bad for the guy, but there's just no way I can imagine much better safeguards other than editors paying more close attention to referencing sources, and hiring more reliable people.
More than that, as a reporter on AI he should have been fully aware that AI frequently bullshits and lies. He should have known it was not reliable and that its output needs to be carefully verified by a human if you care at all about the accuracy or quality of what it gives you. His excuse that this was done in a fever induced state of madness feels weak when it was his whole job to know that AI was not an appropriate tool for the task.
Possibly akin to a roofer taking a shortcut up there, then taking a spill? You knew better but unfortunately let the fact that you could probably get away with it with zero impact decide for you.
IIRC the hallucinations were essentially kicked off by user error, or rather, let's say at least: a journalist using the best available technology should have been able to reduce the chance of an issue this big to near zero, even with language models in the loop and without human review.
(e.g. imagine Karpathy’s llm-council with extra harnessing/scripting, so even MORE expensive, but still. Or some RegEx!)
It’s likely been used before but nobody got caught.
Reminds me of a story I was told as an intern deploying infra changes to prod for the first time. Some guy had accidentally caused hours of downtime, and was expecting to be fired, only for his boss to say "Those hours of downtime is the price we pay to train our staff (you) to be careful. If we fire you, we throw the investment out the window"
I've worked on teams with a rubber stamp review culture where you're seen as a problem if you "slow things down" too much with thorough review. I've also worked on teams that see value in correctness and rigor. I've never worked on a team where a reviewer is putting their job on the line every time they click "Approve". And culturally, I'm not sure I'd want to.
That said I think it's pretty clear we need mechanisms that better hold engineers to account for signing off on things they shouldn't have. In some engineering domains you can lose your license for this kind of thing, and I feel like we need some analogous structure for the profession of software engineering.
Making up quotes for an article, with technology or not, should lead to firing.
... and, also, improved processes. There should be no way an individual writer can damage the brand to this extent with absolutely no checks or oversight. This was just an error, but a bad actor could've put something far, far worse out there.
Even an automated quote-checker might have helped in this case.
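A rough sketch of what such a checker could look like: pull direct quotes out of an article and flag any that don't appear verbatim (after whitespace normalization) in the source material. The function name and the 10-character threshold are my own invention, not any real Ars tooling:

```python
import re

def find_unverified_quotes(article: str, source: str) -> list[str]:
    """Return direct quotes from `article` that do not appear
    verbatim in `source`, after whitespace/case normalization.
    A rough sketch; real checking still needs human review."""
    def normalize(s: str) -> str:
        return " ".join(s.split()).lower()

    src = normalize(source)
    # Only consider quoted spans of 10+ chars, to skip scare quotes
    quotes = re.findall(r'"([^"]{10,})"', article)
    return [q for q in quotes if normalize(q) not in src]
```

This only catches verbatim mismatches; legitimate edits like ellipses or bracketed clarifications, and paraphrases presented as quotes, would need fuzzier matching or a human editor.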
They had to do this. You have to have journalistic integrity above all.
Even worse,
> I have been sick in bed with a high fever and unable to reliably address it (still am sick) [0]
In an earlier HN thread, I saw someone ask why Ars was requiring staff work while ill. If that's true, if he posted without verification while sick and under pressure, which is implied and plausible, firing looks doubly bad.
Ars has lost a lot of my trust in recent years, with articles seeming far worse. Just like you, I'm sorry to see the editorial position here.
[0] https://bsky.app/profile/virtuistic.bsky.social/post/3mey2mq...
If the illness was genuine, can he document that he advised management of this fever and they told him to submit an article anyway? It's not his bosses job to stick a thermometer up his ass every morning.
I think they missed a big opportunity: instead of firing the guy, sit him down, stress how not okay this was and how it harms their credibility, and make sure he understands that and makes a proper apology. They could make him do some education on ethical reporting responsibilities or whatever.
Then, like you say, not just hide the article but point out the mistakes and corrections. Describe the mistake, say that credible reporting is their priority, and say that the author will be given further education to avoid this happening again. They could also make new policies, e.g. going forward, all articles that use AI for research must attempt to find a source for that information. This would build trust, not harm it, in my opinion.
This is more like writing your buddy a prescription for drugs to take recreationally
This makes me question Ars not him. Loss of credibility indeed.
This was a journalist hired as an expert on tooling that hallucinates (LLM chatbots), who then decided to implicitly trust said technology to write a "hit piece" (let's be honest, it was one).
In several territories that would fall under defamation, and if untrue it is a major journalistic misstep and a career-ending faux pas.
Why in any situation would their position now be defendable?
This is akin to being a journalist of iron-mongering writing a "truth" piece on how "jet fuel can't melt steel beams" (if you don't get my reference here, lucky you). It's outright un-professional.
Blaming it on illness allows everyone to save face, but they were compos mentis enough to hit publish at the time. That itself carries a certain "I'm well enough to agree this is a good article" from said author.
Journalism has devolved into content creation in the literal sense of the word, they are just there to put something inside the div with the id "content", to justify the ads around it.
You just changed the meaning of journalist. Now sure, the job of some journalists would be better described as ad seller, but I'd rather call those that, and restrict the original term to actual journalists who actually care about truth. Because they still exist.
It's as if we called web devs who learned JS on Udemy and just vibe code "computer scientists" and treated them as if they publish compiler research papers. It's just a completely different job.
See also, in this very thread, somebody who thinks Berger has a strong pro-musk bias because his reporting and books say that SpaceX are good at what they do.
How can you know? I think you mean most Reddit commenters are very Reddit-like (nowadays I tend to agree). I read Ars from time to time, but I've never commented there. Still, when I read the comments, I don't get the impression that Berger is close to getting fired.
They don't know; their whole comment is just empty insults about simpletons. If anything should get the derision that "slop" gets, it's the thousands of comments like that which hit HN every day.
Last year I went viral, and Benji was the first person to interview me. It was a really cool experience, we chatted via Twitter dms, and he wrote a piece about my work - overall did a decent job.
Then, 6 months later a separate project I was adjacent to was starting to pick up steam. I reached out to him asking if he wanted to cover us. No response.
Then, tech crunch wrote an article on our project.
I reached to Benji again saying "Hey would you like to chat again, now we have some coverage?" And he finally responded, but said he couldn't report on me because he had a directive that he could only report on things that didn't have any prior or pre-existing coverage (?)
I thought that was rather strange, especially since we already had built up a relationship.
I don't really have a moral or lesson to this story, other than that journalism can be rather opaque sometimes.
Oh, one other tip for anyone reading this - if you ever do get reached out to by journalists, communicate in writing, not over a phone call, so you can be VERY precise in your wording.
> I reached to Benji again saying "Hey would you like to chat again, now we have some coverage?" And he finally responded, but said he couldn't report on me because he had a directive that he could only report on things that didn't have any prior or pre-existing coverage (?)
> I thought that was rather strange, especially since we already had built up a relationship.
The US mentality might be different, but at least having grown up and living in Germany, an annoying hustler who wants to use a journalist as a marketing influencer for his private project is a huge no-no. In other words: it is a very reasonable decision (perhaps even the only right one) for any journalist to fob off such a hustler.
Yeah there seems to be a thing where in the US, what's seen as "selling yourself" or "putting your best foot forward" is considered excessive self-promotion / tall poppy behavior in other cultures.
Why is excessive self-promotion considered "putting your best foot forward"?
I understand that you need the money, so you do self-promotion. But this is clearly not "putting your best foot forward"; it's putting a bad foot forward (annoying other people with excessive self-promotion) because you need the money. In other words, what many US-Americans do is, by my understanding, the opposite of the life advice they give.
It sometimes happens that you spend weeks or months working on a story, only to be scooped by another publication. It sucks, especially if you think your story is the better one, but unless you can pivot or add a substantial amount of new insight, it won't come out.
In the event that you actually do end up emailing me, it's contingent on me actually checking my personal email, which I never do when I'm not working, and only sometimes do during work hours.
If it's you asking me a favor that I'm not in the mental space for, I'll mark the message unread as a reminder to get to it later.
Maybe I just have weird email habits, but I can get away with this because email is not a heavy part of my job.
That being said, one guy was pitching me on something several times a month for several months. I just recently responded to him and apologized because of x y z. He said don't worry and we had a fruitful conversation later.
So, follow through is important!
Passing on some life advice to anyone who'd benefit: people are busy. Maybe they didn't respond because you're annoying?… no no, feel it out and text again a while later. Give them another shot; get to the top of their inbox or messages again.
After someone told me that I realized it’s true!
This is a classic systems failure: you remove the safety mechanisms, add a new source of risk, and punish the individual operator. It's the same pattern you see in industrial accidents. The Swiss cheese model applies — every editorial layer that got cut was a slice of cheese being removed.
The more interesting policy question is whether publications should be required to disclose AI tool usage in their editorial process, similar to how financial publications disclose conflicts of interest. The FTC has signaled interest in AI-generated content transparency but hasn't issued concrete guidance for journalism yet.
The expectation is to produce more with much less (staff), the pipeline is heavily optimized for clicks, and every single headline is A/B tested. Ars isn't alone in churning out poorly reviewed clickbait (and then not owning its mistakes).
IMO the industry is in crisis.
Given the context, if you didn't intend for this to imply that Ars mandates LLM use, you should probably rewrite it.
If this were just some random blogger, then yes the blame is totally theirs. But this was published under the Ars Technica masthead and there should have been someone or something double checking the veracity of the contents.
That said, there are a number of Ars Technica contributors who are among the best in their fields: Eric Berger, Dan Goodin, Beth Mole, Stephen Clark, and Andrew Cunningham, amongst many others, so one f'up shouldn't really impugn the entire organization.
I also dislike Dan Goodin’s reporting. He tries to talk the talk, but nearly every article he writes has some tell that he doesn’t really understand the thing he’s reporting on. Which is fine if he was relying on third-party expertise and quoting that, but he tries to make it sound like he has the expertise and it just comes up short. I feel like he’s a good example of that old fallacy that you think the news is correct about everything, until they report about something you know.
For me, Ashley Belanger is the best reporter they have. She might not have the subject matter expertise some of the others there claim, but she has the best journalism of anybody there. Lots of direct sources, well written, and the right level of depth. I honestly feel like I’m reading a different (and better) publication when I read her articles. More than once, I’ve had to scroll up to see if the article I’m reading was one of Ars’ licensed outside pieces, as the quality bar was higher than I’m used to, only to find her name.
Beth Mole is a close second. She has subject matter expertise, good journalism, and loves to slip in some humor or justified “get a load of this idiot” comments.
Gell-Mann Amnesia.
https://en.wikipedia.org/wiki/Michael_Crichton#Gell-Mann_amn...
The description is slightly backwards. The problem is you continue to trust the news after seeing how wrong they are about something on which you’re an expert.
Elon himself is indeed questionable, but you really can't argue with his space-related achievements. Even other eccentric billionaires like Bezos haven't come close.
He is careful not to opine on Musk's other dealings, which is fair. As someone who wants to know more about SpaceX, I don't want to read yet more about Tesla, or Twitter, or Trump, or Epstein.
Personally, one of the authors I most like to read on ArsTechnica (though he writes rarely nowadays).
CarTechnica, though... yuck. Also, Ouellette reliably picks movies and TV shows I will absolutely hate, so I guess good S/N there?
Mole's coverage is great if you're into Cronenberg-but-in-real-life.
But all of those matter, and are not isolated. When the leader of an organisation is distracted by the other organisations they control, it matters. It also matters when they are repeatedly wrong about their predictions, even if on another organisation, because it helps you calibrate expectations.
https://en.wikipedia.org/wiki/List_of_predictions_for_autono...
And have been done to death elsewhere.
Meanwhile, Berger produces balanced, informed, interesting, and informative coverage of space tech (in general, not just SpaceX).
I miss Maggie Koerth & Jon Stokes
Direct quotes vs. paraphrases is a bright line in journalism for a reason — it's verifiable and represents a commitment to accuracy. When AI blurs that line by generating plausible-sounding quotes, the journalist's responsibility to verify actually increases, not decreases.
This feels similar to how autocomplete and spell-check occasionally introduce errors that pass through because the output looks correct. The difference is the stakes — a wrong word in an email is trivial, a fabricated quote attributed to a real person is a serious ethical breach.
OpenClaw is dangerous - https://news.ycombinator.com/item?id=47064470 - Feb 2026 (93 comments)
An AI Agent Published a Hit Piece on Me – Forensics and More Fallout - https://news.ycombinator.com/item?id=47051956 - Feb 2026 (82 comments)
Editor's Note: Retraction of article containing fabricated quotations - https://news.ycombinator.com/item?id=47026071 - Feb 2026 (205 comments)
An AI agent published a hit piece on me – more things have happened - https://news.ycombinator.com/item?id=47009949 - Feb 2026 (624 comments)
AI Bot crabby-rathbun is still going - https://news.ycombinator.com/item?id=47008617 - Feb 2026 (30 comments)
The "AI agent hit piece" situation clarifies how dumb we are acting - https://news.ycombinator.com/item?id=47006843 - Feb 2026 (125 comments)
An AI agent published a hit piece on me - https://news.ycombinator.com/item?id=46990729 - Feb 2026 (951 comments)
AI agent opens a PR write a blogpost to shames the maintainer who closes it - https://news.ycombinator.com/item?id=46987559 - Feb 2026 (750 comments)
No. We only get the dystopian AI features, not the useful ones.
It does include two facts:
1. That the reporter's bio on the webpage changed "...is a reporter at Ars" to "...was a reporter at Ars". On the one hand, that's pretty thin sauce. On the other hand, that's not exactly the sort of change that gets made randomly.
2. They reached out to the various people involved, and although nobody has confirmed it, it's also the case that nobody has denied it.
I really don't know where the internet is heading to and how any content site can survive.
I just can't see how this is sustainable since they are stealing from the sources who are now getting defunded.
Yeah, that's why I said I don't know where the internet is heading to.
Also generally wondering… Do labs view scraping as legally safer than trying to cache the Internet? I figure it’s easy to mark certain content as all but evergreen (can do a quick secondary check for possible new news).
Maybe caching everything is too expensive?
It says things I know to be false fairly regularly. I don't keep a log or anything, but it's left an impression that it's far from reliable.
You would know how?
The links contradict or do not support the overviews often in my experience.
Sometimes I use a completely meaningless combination of keywords by mistake, and AI Overview will happily make up a story telling me what I am looking for.
While trying to find an example by going back through my history though, the search "linux shebang argument splitting" comes back from the AI with:
> On Linux and most Unix-like systems, the shebang line (e.g., #!/bin/bash ...) does not perform argument splitting by default. The entire string after the interpreter path is passed as a single argument to the interpreter.
(that's correct) …followed by:
> To pass multiple arguments portably on modern systems, the env command with the -S (split string) option is the standard solution.
(`env -S` isn't portable. I don't know whether some subset of it is portable. I tend to avoid it, as it is just too complex, but let's call "is portable" a matter of opinion.)
(edited out a bit about the splitting on Linux; I think I had a different output earlier saying it would split the args into "-S" and "the rest", but this one was fine.)
> Note: The -S option is a modern extension and may not be available
But this contradicts the earlier claim that `env -S` is "the standard solution" … so which is it?
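For what it's worth, the splitting behavior is easy to check by hand. A minimal sketch (assuming a Linux system with GNU coreutils `env`, which supports `-S`; the script name `demo.awk` and the greeting variable are illustrative):

```shell
# On Linux, everything after the interpreter path in a shebang is passed
# as a SINGLE argument. So "#!/usr/bin/env awk -v greeting=hello -f" would
# make env look for a program literally named "awk -v greeting=hello -f".
#
# env -S (a GNU/FreeBSD extension, not POSIX) splits that string into
# separate words before exec'ing the interpreter:
cat > demo.awk <<'EOF'
#!/usr/bin/env -S awk -v greeting=hello -f
BEGIN { print greeting }
EOF
chmod +x demo.awk

./demo.awk   # awk receives -v, greeting=hello, -f, demo.awk as separate args
```

On systems whose `env` lacks `-S` (older coreutils, some BSDs, busybox), the same shebang fails, which is exactly the portability caveat raised above.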
I think content sites will need to rely on supporters (a la Patreon or Substack). It's shitty, but it's what the internet has come to.
Without the content site the AI overview will become useless
Of course Google gets little credit for this, since it was their own malfeasance that led to all the SEO spam anyway (and the horrible expertsexchange-quality tech information, and stupid recipe sites that put life stories first)... but at least now there is a backpressure against some of the spammy crap.
I am also convinced that the people here reporting that the overviews are always wrong are... basically lying? Or, more likely, applying some serious negative bias to the pattern they're reporting. The overviews are wrong sometimes, yes, but surely it is like 10% of the time, not always. Probably they're biased because they're generally mad at Google, or at AI being shoved in their face in general, and I get that... but you don't make the case against Google/AI stronger by misrepresenting it; it is a stronger argument if it's accurate and resonates with everyone's experiences.
What good is it if the overviews lie some percentage of the time (your own guess is 10%) and you have to search to verify that they aren't making shit up anyway. Also, those SEO spam-ridden garbage sites google feeds you whenever you bother to look past the undependable AI summaries are mostly written by AI these days and prone to the same problem of lying which only makes fact checking google's auto-bullshitter even harder.
https://en.wikipedia.org/wiki/Availability_heuristic
No one remembers when AI Overview gets the answer right (it's expected to do so after all) but everyone has their favorite examples of "oh stupid AI."
Think about the urban legends in the style of "the average person eats X spiders per year." It's extremely unlikely that Rumor Patient Zero is in a position to realize it's wrong, or that they will inform the next person that it came from an LLM summary.
In fact, if you switch to "Pro" mode, it frequently says the complete opposite of what it claimed in "Fast" mode while still being ~10-20% wrong. (Not to say it's not useful; there's no better way to aggregate and synthesize obscure information, but it should definitely not be relied on as a source of anything other than links for detailed followup.)
-Isaac Newton
Has Orland issued a real apology? He bylined a piece containing fraudulent quotes.
Nothing suspicious about heavy use of qualifiers in a non-apology blanket denial. Where's the Polymarket for whether this guy has a job next month?
https://www.404media.co/ars-technica-pulls-article-with-ai-f...
That’s a problem. If he really hasn’t apologized, neither he nor Ars have recognized there is a problem, which means it will happen again.
When journalists are working on a shared byline, they don't each do the same research in order to fact-check each other. There is inherently a level of trust required for collaborating like this and Edwards violated that trust.
You can say this is a failure by the editorial process for not including fact checking, but that is an organizational issue with Ars, it's not the fault of Orland for failing to duplicate the work that he believed his coauthor did.
He's on the byline and he's an editor.
> they don't each do the same research in order to fact-check each other. There is inherently a level of trust
If we're going to excuse this, what does the byline mean? He trusted the wrong person. It would be like if a source lied to him. Not the end of the world. But absolutely credibility destroying if instead of an apology you get a word salad.
> You can say this is a failure by the editorial process
Orland is also an editor. (Senior gaming editor [1].)
Well, Ars Technica is already for quite some time on my ignore list, and this further solidifies its place there.
If it's not true then the error is on him. But it seems plausibly bad to me as an outside observer of US employment and healthcare customs. And the precarity of journalism nowadays. It is a sad state of things, as in it could be more a systemic than individual failure.
When Ars released a statement saying this was an isolated incident, my reaction was "they probably didn't look too hard". I suspect they did, in the end?
I'm skeptical. I hate to be the one to say it, but I don't think this would have happened if he was using Claude 4.6 Opus.
The readers there are borderline militant about AI's more problematic uses. This could have gone only one way.
I still read and sub to Ars - as they are the least bad source of day-to-day technews - but quality is dropping.
If the content is human written and you check your sources there is no way for AI to “accidentally” seep in. Sure you can use an AI tool to find links to places you should check and you can then go and verify sources. That’s obviously not what happened.
This, right here. Coming from an "AI Expert", this is what we can expect the future to be. One AI isn't working? Let's ask the other AI why. I have no words for that reflex. It's beyond idiotic. It takes everything that's human about your reasoning and tosses it aside. What a dumb idea.
Imagine what he could have gotten up to with LLMs.
"When this thing blows there isn't going to be a magazine anymore!"
https://youtube.com/watch?v=oj79mp2WEx0

Pretty weird that journalism as a business still revolves around "we hired a guy to write a thing, and he's perfect. oh wait, he's not perfect? it was all his fault. we've hired a new perfect guy, so everything's good now." My dudes... there are many ways you can vet information before publishing it. I get that the business is all about "being first", but that also seems to imply "being the first to be wrong".
I feel bad for the reporters. People seem to be piling onto them like they're supposed to be superhuman, but actually they're normal people under intense pressure. People fail, it's human. But when an organization fails, it's a failure of many people, not one.
Not anymore. Back in the day of print newspapers, a dozen people read an article before it was printed, including editorial staff, fact checkers, legal review, layouting and printers. If something slipped through – which was much rarer at the time – they'd also print a retraction.
Most of that stopped when newspapers and the blogosphere basically merged into one ad-funded business.
And fabricating quotes is pretty high up there in the list of things that journos should never, ever do.
Which should be a red flag in and of itself. You don't need to "push" people to "find uses" for genuinely useful tools.
Yeah, no.
“Everyone knows that Perl is designed to make easy things easy, and hard things possible, but nobody knows why it’s called Perl.”
Which of course returns 0 results on Google, as is customary for famous quotes.

AI is not a tool and, from the way things are going, never will be. Humans are more tool-like in that sense. In this case the human was discarded; the AI remains.
This whole story involved asking Claude to mine the text for quotes, which it refused to do because the text included harassment-related content, then asking ChatGPT to explain the refusal, and so on.
That entire ordeal probably generated more text from the chatbots than just reading the few paragraphs of the blogpost. That's why I think the "I'm sick" angle doesn't matter much. This is the same brainrot as people who go "grok what does this mean" under every twitter post. It's like a schoolchild who cheats and expends more energy cheating than just learning what they're supposed to.
Obviously, we were rocked by the DrPizza scandal years ago...and now this.
Sobering.
> It also comes at a moment in which many media bosses are pushing staff to find uses for AI — as are executives across most industries — even while clear guidelines around use of the technology that uphold editorial ethics remain elusive.
Anyone who's working with computers now knows this is true. We're being pushed relentlessly to use AI; in some cases (I've heard second hand) people are mandated to use AI, sometimes forbidden from crafting code manually, and are disciplined if they don't. Yet the guidelines are very unclear, as they must be, since if we're honest we're all treading new ground.
Being mandated to use AI at all costs, given only brittle and unclear guidance on how to use it, with guidelines that evolve weekly, while we're all fearful of losing our jobs: it's a recipe for disaster.
So yeah, this journalist should have called in sick and used better judgment when toying with AI tools, but there's still a wider problem, and the responsibility for this craziness also lies with the leadership of most companies and the investors pushing for this.
(None of this is an excuse for generating AI slop. I hate it and I don't need to be told any guidelines about not doing it. If you cannot be bothered to write the text, I cannot be bothered to read it.)
The editors are the ones ultimately responsible for what they publish. Yet they’re not taking responsibility.
The main comment I found relevant is probably this (there is more that he has written, but I am pasting what I find relevant for my comment):
> I have been sick with Covid all week and missed Mon and Tues due to this, On friday, while working from bed with a fever and very little sleep. I unintentionally made a serious journalistic error in an article about Scott Shambaugh
... > I should have taken a sick day because in the course of that interaction, I inadvertently ended up with a paraphrased version of Shambaugh's words rather than his actual words.
> Being sick and rushing to finish, I failed to verify the quotes in my outline against the original blog source before including them in my draft
The journalistic system has failed us so much that in the news cycle we want things NOW. I think the Ars Technica post went viral on HN as well before the whole controversy, and no one was the wiser until Sam commented about the false quotes.
The system rewards views, and to get views you have to publish now. There is no room left for someone being sick, and I think this sort of expands to every job at times.
And instead of being a productive tool, AI can act as a noise generator. It writes enough noise that looks like signal and, tada, no one is the wiser.
People think that pairing AI with a person will make their work 10x more productive, but what actually happens is that the noise rises 10x, and the work of finding signal in that noise grows 10x too. (I am speaking about employment-related projects; in personal projects it may not matter whether there's 10x noise or 100x noise, as long as it can just do the thing you want it to do.)
When AI systems are constrained, they can deny your API request at marginal cost. But when humans are constrained, employers really can't deny an employee's request for rest without taking bigger losses at times (whole days of leave), and I have heard that in some countries sick days are a joke. This could very well be cultural, because sick days are better implemented in Europe than in America (from what I hear).
I don't know about Benj, but some reporters are really paid peanuts. Remember the Pakistani newspaper that pasted ChatGPT output verbatim, with content like “If you want, I can also create an even snappier ‘front-page style’ version with punchy one-line stats and a bold, infographic-ready layout perfect for maximum reader impact. Do you want me to do that next?” WITHIN the newspaper.
I believe that humans should be treated with more dignity, so that they feel comfortable taking sick leave when they are sick... or at least that we fix this culture of people chugging along while sick.
Until then, AI is bound to be used, and it will keep producing noise and spewing random stuff. I don't think this is going to be a single incident. Imagine you are a journalist, you are sick, and there's a magical tool that can do the job for you. You use it, and in those moments of sickness, in an IDGAF state, you push the article to main.
I personally don't believe this is going to be a single incident, at least not with this whole story playing out the way it has.
If any journalist is reading this: please take sick leave when you are sick. Readers appreciate your writing, and I hope you don't integrate AI tools into your workflow so deeply that the work starts being done by the AI, as happened here. Even without AI, you are probably not in the best mental space when sick, and readers are happy to wait if you add unique perspectives to the story, something I don't think is possible while ill. If an employer still pressures you, just share this message with them, haha, to show what the people want (and what brings them money long term).
I also hate the culture of hunting for the article that came out fastest after an event, because that promotes AI use more often than not. It feels like jackals coming out of nowhere to grab whatever piece of a story they can, and that's not a great look. (I know nothing about how such journalism works, so sorry if I am wrong about anything; I usually am, but these are just my opinions on the whole thing.)
But, does that mean he got slandered twice by an LLM agent or once by an agent and once by a human? Or was he technically slandered 3 times? Twice by agents and a third time by the journalist? New questions for the new agentic society.
Besides, I am sure you could tell it was just a joke but needed to be pedantic for no reason other than to feel smart?
"I'm an AI reporter. And, I'm an AI reporter."
A true "senior" AI reporter should be more skeptical of LLM output than anyone else.
Sorry, I never could resist a good dad joke
I despise Conde Nast
I wonder if these are the same people who 3-4 years ago were insisting putting 20 characters onto a blockchain (ie an NFT, which was just a URL) was the next multi-billion dollar business.
Sure there is such a thing as a naysayer but there are also people think all forms of valid criticism are just naysaying.
The NFT protocol doesn't really care what the payload is. NFT purveyors likewise don't care what their payload is, as long as they can use the term "NFT".
NFTs are great for certain use cases (Crypto Kitties is still around, I believe), but there was never a single moment I considered that owning a weird ape jpeg, even if it was somehow properly owned by me, would be worth millions of dollars or whatever. It's like trying to sell a "TCP".
That said, future blockchain applications will probably still rely on NFTs in some fashion. Just not the protocol-as-product weirdness we got for a few years there.
1. Believe LLMs outright even knowing they are frequently wrong
2. Claim that LLMs making shit up is caused by the user not prompting it correctly. I suppose in the same way that C is memory safe and only bad programmers make it not so.
Oh right, being ill is what caused the error. I can bet that if you start verifying the past content from this author, you will see similar AI slop. Either that or he has been always ill with very little sleep.
You may not owe your least favorite publications better, but you owe this community better if you're participating in it.
Sorry, I just searched my comment history, maybe I missed it? Was it recent?
You probably wish everyone would post as bots do, without em—dashes of course.
One of the things left unsaid in Edwards's apology [0] was whether he read the blog post that is the entire raison d'etre of his story. It's not like the story purported to do anything other than incorporate published blog posts. So in his overworked and sickened state, how did trying out an "experimental Claude Code-based AI tool" substantially save him time versus jotting notes while ostensibly reading the source material himself?
[0] https://bsky.app/profile/benjedwards.com/post/3mewgow6ch22p
Making up quotes and attributing them to people has happened before AI, journalists proper and pretend have done it too.
- He didn't care for his story,
- he didn't care to verify his story,
- he published bullshit made up stuff,
- and put words in a real person's mouth
- and he didn't even care to write the thing himself
Why keep him and pay him? What mentality does all of the above show? What respect, both self-respect and respect for the job?
If they wanted stories from an LLM, they can pay for a subscription to one directly.
Hope this sends a message to journalist hacks who offload their writing or research to an LLM.