The consumption of AI-generated content at scale (sh-reya.com)
20 points | 7 days ago | 11 comments | HN
krupan
4 hours ago
[-]
"Now, with LLM-generated content, it’s hard to even build mental models for what might go wrong, because there’s such a long tail of possible errors. An LLM-generated literature review might cite the right people but hallucinate the paper titles. Or the titles and venues might look right, but the authors are wrong."

This is insidious and if humans were doing it they would be fired and/or cancelled on the spot. Yet we continue to rave about how amazing LLMs are!

It's actually a complete reversal of the situation with self-driving car AI. Humans crash cars and hurt people all the time. AI cars are already much safer drivers than humans. However, we all go nuts when a Waymo runs over a cat, but ignore the fact that humans do that on a daily basis!

Something is really broken in our collective morals and reasoning

reply
carbarjartar
3 hours ago
[-]
> AI cars are already much safer drivers than humans.

I feel this statement should come with a hefty caveat.

"But look at this statistic" you might retort, but I feel the statistics people pose are weighted heavily in the autonomous service's favor.

The frontrunner in autonomous taxis only runs in very specific cities for very specific reasons.

I avoid using them in a feeble attempt to 'do my part', but I was recently talking to a friend and was surprised that they avoid these autonomous services because the cars drive what would be, to a human driver, very strange routes.

I wondered if these unconventional, often longer, routes were also taken in order to stick to well trodden and predictable paths.

"X deaths/injuries per mile" is a useless metric when the autonomous vehicles only drive in specific places and conditions.

To get the true statistic you'd have to filter the human-driver statistics to match the autonomous services' operating conditions: weather, cities, number and location of people in the vehicle, and even which streets.
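The matched comparison described above can be sketched in a few lines. This is a purely illustrative sketch, not anything the providers actually publish: the field names (`trip_id`, `incidents`, `city`, `weather`, `street`) and the condition sets are all made up for the example.

```python
def rate_per_million_miles(records, miles_by_trip):
    """Incidents per million miles for a set of trip records."""
    total_miles = sum(miles_by_trip[r["trip_id"]] for r in records)
    incidents = sum(r["incidents"] for r in records)
    return 1e6 * incidents / total_miles if total_miles else float("nan")


def matched_subset(human_records, av_conditions):
    """Keep only human-driver trips taken under conditions the AV fleet
    actually operates in (same cities, weather, and street set)."""
    return [
        r for r in human_records
        if r["city"] in av_conditions["cities"]
        and r["weather"] in av_conditions["weather"]
        and r["street"] in av_conditions["streets"]
    ]
```

Comparing `rate_per_million_miles(matched_subset(...), ...)` for humans against the AV fleet's own rate is the apples-to-apples number the comment is asking for; the usual headline statistic skips the `matched_subset` step.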

These service providers could do this; they have the data, compute, and engineering to do so. But they are disincentivized as long as everyone keeps parroting their marketing speak for them.

reply
colonCapitalDee
2 hours ago
[-]
I don't see why that matters. The city selection and routing are part of the overall autonomous system. People get where they need to be with fewer deaths and injuries, and that's what matters. I suppose you could normalize to "useful miles driven" to account for longer, safer routes, but even then the statistics are overwhelmingly clear that Waymo is at least an order of magnitude safer than human drivers, so a small tweak like that is barely going to matter.
reply
carbarjartar
1 hour ago
[-]
> so a small tweak like that

Well, it would seem these autonomous-driving service providers disagree with your claim that it is just a 'small tweak', considering they only operate under these specific conditions when it would be to their substantial benefit to operate everywhere and at all times.

reply
krupan
3 hours ago
[-]
Thank you for demonstrating my point about the insane amount of criticism we level at self driving AI
reply
carbarjartar
3 hours ago
[-]
You consider it "sane" to compare the citywide driving statistics of mid-winter Buffalo, New York with mid-summer San Francisco, California driving limited to only Market and Van Ness streets?
reply
chemotaxis
4 hours ago
[-]
The best part is that this article is almost certainly AI-generated or heavily AI-assisted too.

Before people get angry with me... there are plenty of small tells, starting with section headings, a lot of linguistic choices, and low information density... but more importantly, the author openly says she writes using LLMs: https://www.sh-reya.com/blog/ai-writing/#how-i-write-with-ll...

reply
absoluteunit1
4 hours ago
[-]
Was thinking this as well.

Just skimming through the first two paragraphs felt like I was reading a ChatGPT response. That, and the fact that there are multiple em dashes in the intro alone.

reply
spoiler
3 hours ago
[-]
Tangentially related, but I'm low-key miffed that em dashes get a bad rep now because of AI.

They're a great way to "inject" something into a sentence, similar to how people speak in person. I feel like my written style has gotten worse because I have to dumb it down, or I'll be anxious that any writing/linguistic flourish will be interpreted as gen AI.

reply
FarmerPotato
2 hours ago
[-]
I learned em-dash from The Mac Is Not A Typewriter. From now on I'll keep the -- plain ASCII to hopefully avoid the backlash.
reply
phainopepla2
4 hours ago
[-]
I would think a decent LLM would know the difference between a metaphor and simile, unlike the author
reply
Havoc
3 hours ago
[-]
Noticing this most in visual content rather than LLM text. The era when anyone young and perpetually online could spot AI via the uncanny valley was remarkably short-lived. [0]

>In the pre-LLM era, I could build mental models, rely on heuristics, or spot-check information strategically.

I wonder if this will be an enduring advantage of the current generation: building your formative world model in a pre-AI era. It seems plausible to me that anyone who built their foundations there has a much higher chance of having grounded instincts, even if post-AI experiences are layered on later.

[0] https://www.reddit.com/r/antiai/comments/1p8z6y6/nano_banana...

reply
nancyminusone
2 hours ago
[-]
In my opinion, this is the biggest (current) problem with AI. It is really good at that thing you used to do when you had to hit a word count in a school essay. How long until the world's hard drive space is filled up with filler words and paragraphs of text that goes nowhere, and how could you possibly search and find anything in such conditions?
reply
furyofantares
4 hours ago
[-]
Scroll through and read only the section headers. I would be shocked if this wasn't at the very least run through an LLM itself. For sure the section headers are; I'll skip the rest unless someone posts that it's worth a read for some reason.

It doesn't appear to be section headings glued together with bullet lists, so maybe the content really does retain the author's perspective, but at this point I'd rather skip stuff I know has been run through an LLM and miss a few gems than get slopped daily.

reply
SunshineTheCat
4 hours ago
[-]
What's crazy is you're starting to see an overreaction to this fact as well.

The other day I posted a short showcasing some artwork I made for a TCG I'm in the process of creating.

Comments poured in saying it was "doomed to fail" because it was just "AI slop."

In the video itself I explained how I made them, in Adobe Illustrator (even showing some of the layers, elements, etc).

Next I'm actually posting a recording of me making a character from start to finish, a timelapse.

It will be interesting to see if I get any more "AI slop" comments, but it's becoming increasingly difficult to share anything drawn now because people immediately assume it's generated.

reply
raincole
35 minutes ago
[-]
I feel you, but people nowadays go to extreme lengths to present AI-generated artwork as hand-drawn.

It's not even funny. You can google "asamiarts tracing over AI" and read the whole drama. They had not only a timelapse, but real-world footage as 'evidence.' And they are not the only case.

It's not a fight you can win. Either ignore the comments calling your work AI, or just use AI.

reply
phainopepla2
4 hours ago
[-]
I have seen this as well. Any nicely formatted medium to long text without obvious errors immediately comes under suspicion, even without the obvious tells
reply
p_l
4 hours ago
[-]
The people commenting about AI slop, at least a considerable portion of them, do so because it allows them to feel morally superior at little effort.

Do not expect them to retract or stop if there's a way to not see the making-of :P

reply
nh23423fefe
3 hours ago
[-]
someone on the internet is wrong?
reply
nh23423fefe
3 hours ago
[-]
gpt is eternal september for normies
reply
tensegrist
4 hours ago
[-]
> There’s a frustration I can’t quite shake when consuming content now—

perhaps even a frustration you can't quite name

reply
pessimizer
3 hours ago
[-]
I'm pretty sure that the reason everything seems like AI is that AI produces stupid, pointless content at scale, and our "writers" have become people who generate stupid, pointless content at scale.

There's no reason for most things to have been written. Whatever point is being made is pointless. It's not really entertaining, it's meant to be identified with; it's not a call to any specific action; it doesn't create some new fertile interpretation of past events or ideas; it's not even a cry for help. It's just pointless fluff to surround advertising. From a high concept likely dictated by somebody's boss.

AI has no passion and no point. It is not trying to convince anyone of anything, because it does not care. If AI were trying to be convincing, it would try to conceal its own style. But it doesn't mean anything for an AI to try. It's just running through the motions of filling out an idea to a certain length. It's whatever the opposite of compression is.

A generation of writers raised on fanfiction and prestige tv who grew up to write Buzzfeed articles at the rate of five a day are indistinguishable from AI.

Why This Matters

reply
FarmerPotato
2 hours ago
[-]
God help us if we give the bag of words a reason to live. It might try to be convincing.
reply
SpaceManNabs
3 hours ago
[-]
> If something seems off, I can just regenerate and hope the next version is better. But that’s not the same as actually checking. It feels like a slot machine—pull the lever again, see if you get a better result—substitutes for the slower, harder work of understanding whether the output is correct.

What a great point. In some work loops I feel like I get addicted to seeing what pops up in the next generation.

One of the things I learned from moderating my internet usage is to not fall prey to recommendation systems. As in: when I am on the web, I only consume what I explicitly looked for, not what the algorithm thinks I should consume next.

Sites like Reddit and HN make this tricky.

reply
bryanrasmussen
7 days ago
[-]
Yeah, everything sounds like AI, and why is that? Well, it might be because everything is AI, but I think that writing style is more LinkedIn than LLM: the style of people who might get slapped down if they wrote something individual.

Much of the world has agreed to sound like machines.

Another thing I've noticed is that weird stuff that is perhaps off in some way, also gets accused of being LLMs because it doesn't feel right.

If you sound unique and weird you get accused of being a bad LLM that can't falsify humanity well enough, and if you sound boring and bland and boosterist, you get accused of being a good LLM.

You can't write like no one else, but you also can't write like everybody else.

reply
FarmerPotato
1 hour ago
[-]
When I encounter this LLM-generated Mad Lib:

"We embody <adjective> <noun> through <adjective> <noun>, <adjective> <noun>, and <adjective> <noun>. "

my uncanny warning blares -- so I test whether it becomes more intelligible with the adjectives stripped out. These padded-out pabulums are the tells.
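The strip-the-adjectives test above can be sketched mechanically. This is a toy illustration only: real adjective detection would need a part-of-speech tagger, so a tiny hand-made buzzword list stands in, and every word in it is my own made-up example.

```python
# Hypothetical buzzword-adjective list; a real test would use a POS tagger.
ADJECTIVES = {"innovative", "robust", "scalable", "transformative", "holistic"}


def strip_adjectives(text):
    """Drop words on the buzzword list, then read what's left to see
    whether the sentence still says anything."""
    kept = [w for w in text.split() if w.lower().strip(",.") not in ADJECTIVES]
    return " ".join(kept)
```

Applied to the Mad Lib pattern, a sentence like "We embody innovative solutions through robust processes." collapses to "We embody solutions through processes.", which is the padded-out pabulum tell.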

I hope The Elements of Style is rediscovered.

reply
1bpp
4 hours ago
[-]
Text feeling awkward or not flowing very well has ironically become a very strong signal for human-written text for me, and usually makes me pay more attention now
reply