But it's the jagged edges, the unorthodox and surprising prickly bits, that tear open a hole in the inattention of your reader, that actually get your ideas into their heads.
But then, the writing is also never great. I've tried a couple of times to get it to write in the style of a famous author, sometimes pasting in some example text to model the output on, but it never sounds right.
Which is the real issue: we're flooding channels not designed for such low-effort submissions. AI slop is just spam in a different context.
But the critical point is that you need to stay in control. A lot of people just delegate the entire process to an LLM: "here's a thought I had, write a blog post about it", "write a design doc for a system that does X", "write a book about how AI changed my life". Then they ship it and outsource the work of making sense of the output and catching errors to others.
It also results in the creation of content that, frankly, shouldn't exist because it has no reason to exist. The amount of online content that doesn't say anything at all has absolutely exploded in the past 2-3 years. Including a lot of LLM-generated think pieces about LLMs that grace the hallways of HN.
The edges are where interesting stuff happens. The boring part can be made more efficient. I don’t need to type boring emails, people who can’t articulate well will be elevated.
It’s the efficient popularization of the boring stuff. Not much else.
He lacks (or lost through disuse) technical expertise on the subject, so he uses more and more fuzzy words, leaky analogies, buzzwords.
This may be why AI-generated content has so much success among leaders and politicians.
This brings to mind what I think is a great description of the process LLMs exert on prose: sanding.
It's an algorithmic trend towards the median, thus they are sanding down your words until they're a smooth average of their approximate neighbors.
I would rather read the prompt than the generative output, even if it’s just disjointed words and sentence fragments.
don't be mean, it's median AI à la mode
https://youtu.be/605MhQdS7NE?si=IKMNuSU1c1uaVCDB&t=730
He ended a critical commentary by suggesting that the author he was responding to should think more critically about the topic rather than repeating falsehoods because "they set off the tuning fork in the loins of your own dogmatism."
Yeah, AI could not come up with that phrase.
Agreed.
"AI" would never say "loins" (too sexual)
"AI" would never say "dogmatism" (encroaches on the "AI" provider's own marketing scheme)
If you like your prose to be anodyne, then maybe you like what AI produces.
I see it on recent blog posts, on news articles, obituaries, YT channels. Sometimes mixed with voice impersonation of famous physicists like Feynman or Susskind.
I find it genuinely soul-crushing and even depressing, but I may be oversensitive to it, as most readers don't seem to notice.
Maybe I'm going crazy but I can smell it in the OP as well.
I've had AI be boring, but I've also seen things like original jokes that were legitimately funny. Maybe it's the prompts people use: they don't give it enough semantic and dialectic direction to avoid being generic. IRL, we look at a person and get a feel for them and the situation to determine those things.
I'm no fantasy author, and my prose leaves much to be desired. The stuff the LLM comes up with is so mind-numbingly bland. I've given up on having it write descriptions of any characters or locations. I just use it for very general ideas and plot lines, and then come up with the rest of the details on the fly myself. The plot lines and ideas it comes up with are very generic and bland. I mainly do it just to save time, but I throw away 50% of the "ideas" because they make no sense or are really lame.
What I have found LLMs to be helpful with is writing up fun post-session recaps I share with the adventurers.
I recap in my own words what happened during the session, then have the LLM structure it into a "fun to read" narrative style. ChatGPT seems to prefer a Sanderson jokey tone, but I could probably tailor this.
Then I go through it and tweak some of the boring / bland bits. The end result is really fun to read, and took 1/20th the time it would have taken me to write it all out myself. The LLM would never have been able to come up with the unique and fun story lines, but it is good at giving an existing story some narrative flair in a short amount of time.
It shocks me when proponents of AI writing for ideation aren't concerned with *Metaphoric Cleansing* and *Lexical Flattening* (to use two of the terms defined in the article)
Doesn't it concern you that the explanation of a concept by the AI may represent only a highly distorted caricature of the way that concept is actually understood by those who use it fluently?
Don't get me wrong, I think that LLMs are very useful as a sort of search engine for yet-unknown terms. But once you know *how* to talk about a concept (meaning you understand enough jargon to do traditional research), I find that I'm far better off tracking down books and human authored resources than I am trying to get the LLM to regurgitate its training data.
The more interesting cause I think: RLHF is the primary driver, not just the architecture. Fine-tuning is trained on human preference ratings where "clear," "safe," and "inoffensive" consistently win pairwise comparisons. That creates a training signal that literally penalizes distinctiveness - a model that says something surprising loses to one that says something expected. Successful RLHF concentrates probability mass toward the median preferred output, basically by definition.
Base models - before fine-tuning - are genuinely weirder. More likely to use unusual phrasing, make unexpected associative leaps, break register mid-paragraph. Semantic ablation isn't a side effect of the training process, it's the intended outcome of the objective.
Which makes the fix hard: you can't really prompt your way out of it once a model is heavily tuned. Temperature helps a little but the distribution is already skewed. Where we've gotten better results is routing "preserve the voice" tasks to less-tuned models, and saving the heavily RLHF'd models for structured extraction and classification where blandness is actually what you want.
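A minimal sketch of why temperature only helps at the margin (illustrative numbers, not from any real model): once fine-tuning has pushed the logits for "safe" continuations far above everything else, rescaling flattens the distribution a little but cannot restore the modes that were pushed down.

    import numpy as np

    # Hypothetical post-RLHF logits: three "safe" tokens dominate,
    # two "weird" tokens are buried.
    logits = np.array([8.0, 7.5, 7.0, -2.0, -3.0])

    def sample_probs(logits, temperature):
        """Softmax with temperature scaling."""
        scaled = logits / temperature
        scaled = scaled - scaled.max()   # numerical stability
        exp = np.exp(scaled)
        return exp / exp.sum()

    for t in (1.0, 1.5, 2.0):
        print(t, np.round(sample_probs(logits, t), 3))

    # Even at temperature 2.0 the buried tokens stay below ~0.3% probability:
    # the skew is baked into the logits, so sampling tweaks only nudge it.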
Semantic ablation is also why I'm doubtful of everyone proclaiming that Opus 4 would be AGI if we just gave it the right agent harness and let all the agents run free on the web. In reality they would distill it to a meaningless homogeneous stew.
I'm so glad that you have given me the language to express this perspective.
For example, the Anthropic Frontend Design skill instructs:
"Typography: Choose fonts that are beautiful, unique, and interesting. Avoid generic fonts like Arial and Inter; opt instead for distinctive choices that elevate the frontend's aesthetics; unexpected, characterful font choices. Pair a distinctive display font with a refined body font."
Or
"NEVER use generic AI-generated aesthetics like overused font families (Inter, Roboto, Arial, system fonts), cliched color schemes (particularly purple gradients on white backgrounds), predictable layouts and component patterns, and cookie-cutter design that lacks context-specific character." 1
Maybe something similar would be possible for writing nuances.
1 https://github.com/anthropics/skills/blob/main/skills/fronte...
That's certainly a take. In the translation industry (the primogenitor and driver of much of the architecture and theory behind LLMs), the models are known for making unconventional choices to such a degree that it actively degrades the quality of translation.
I'm in no way anti-LLM, as I have benefited from them, but I believe the issue that will arise is that their unpredictable nature means they can only be used in narrowly defined contexts. Safety and trust are paramount. Would you use online banking if the balance on your account randomly changed and was not reproducible? No chance.
This does not achieve the ROI that investors in these model producers are expecting. The question is whether said investors can sell off their shares before it becomes more widely known.
You put words to something that's been on my mind for a while!
LLMs are a tool for marketers or state departments who want to create FUD at a moment's notice.
The obvious truth is that LLMs basically suck for writing code.
The real marketing scheme is the ability to silence and stifle that obvious truth.
The real danger is the future investment needed to explore other architectures beyond LLMs. Will private firms be able to get the investment? Will public firms be granted the permission to do another round of large capex by investors? As time goes on, Apple's conservative approach means they will be the only firm trusted with its cash balance. They are very nicely seated despite all the furore they've had to endure.
It wanted to replace all the little bits of me that were in there.
The first requires intention, something that, as far as we know, LLMs simply cannot truly have or express. The second is something that can be approximated. Perhaps very well, but a mass of people using the same models with the same approximations will still lead to a loss of distinction.
Perhaps LLMs that were fully individually trained could sufficiently replicate a person's quirks (I dunno), but that's hardly a scalable process.
This also reminded me that on OpenRouter, you can sort models by category. The ones tagged "Roleplay" and "Marketing" are probably going to have better writing compared to models like Opus 4 or ChatGPT 5.2.
[1]: https://www.techradar.com/ai-platforms-assistants/sam-altman...
"Update the dependencies in this repo"
"Of course, I will. It will be an honor, and may I say, a beautiful privilege for me to do so. Oh how I wonder if..." vrs "Okay, I'll be updating dependencies..."
Why was the title of the link on HackerNews updated to remove the term "Dangerous"?
The term was in the link on HackerNews for the first hour or so that this post was live.
I'm not sure what's driving this. It reminds me of SEO.
(Obviously a different question from "is an AI lab willing to release that publicly" ;)
https://nostalgebraist.tumblr.com/post/778041178124926976/hy...
https://nostalgebraist.tumblr.com/post/792464928029163520/th...
(Not necessarily disagreeing with those claims, but I'd like to see a more robust exploration of them.)
https://news.ycombinator.com/item?id=46583410#46584336
https://news.ycombinator.com/item?id=46605716#46609480
https://news.ycombinator.com/item?id=46617456#46619136
https://news.ycombinator.com/item?id=46658345#46662218
https://news.ycombinator.com/item?id=46630869#46663276
https://news.ycombinator.com/item?id=46656759#46663322
I disagree pretty strongly with most of what an LLM suggests by way of rewriting. They're absolutely appalling writers. If you're looking for something beyond corporate safespeak or stylistic pastiche, they drain the blood out of everything.
The skin of their prose lacks the luminous translucency, the subsurface scattering, that separates the dead from the living.
Strong agree, which is why I disagree with this OP point:
“Stage 2: Lexical flattening. Domain-specific jargon and high-precision technical terms are sacrificed for "accessibility." The model performs a statistical substitution, replacing a 1-of-10,000 token with a 1-of-100 synonym, effectively diluting the semantic density and specific gravity of the argument.”
I see enough jargon in everyday business email that in the office zero-shot LLM unspoolings can feel refreshing.
I have "avoid jargon and buzzwords" as one of very tiny tuners in my LLM prefs. I've found LLMs can shed corporate safespeak, or even add a touch of sparkle back to a corporate memo.
Otherwise very bright writers have been "polished" to remove all interestingness by pre-LLM corporate homogenization. Give them a prompt that yells at them for using 1-in-10 words instead of 1-in-10,000 ones (higher "perplexity"), and they can tune themselves back to conveying more with the same word count. Results… scintillate.
AI has been great for removing this stress. "Tell Joe no f'n way" in a professional tone and I can move on with my day.
Lol no. Might be great for you as a consumer who is using these products for free. But expand the picture more.
Cpl. Barnes: Well, Lt. Kaffee, that's not in the book, sir.
Kaffee: You mean to say in all your time at Gitmo, you've never had a meal?
Cpl. Barnes: No, sir. Three squares a day, sir.
Kaffee: I don't understand. How did you know where the mess hall was if it's not in this book?
Cpl. Barnes: Well, I guess I just followed the crowd at chow time, sir.
Kaffee: No more questions.
It has all the hallmarks of not understanding the underlying mechanisms while repeating the common tropes. Quite ironic, considering what the author's intended "message" is. Jpeg -> jpeg -> jpeg bad. So llm -> llm -> llm must be bad, right?
It reminds me of the media reception of that paper on model collapse. "Training on LLM-generated data leads to collapse". That was in '23 or '24? Yet we're not seeing any collapse, despite models being trained mainly on synthetic data for the past 2 years. That's not how any of it works. Yet everyone has an opinion on how bad it works. Jesus.
It's insane how these kinds of opinion pieces get so upvoted here, while worth-while research, cool positive examples and so on linger in new with one or two upvotes. This has ceased to be a technical subject, and has moved to muh identity.
(I'm frequently guilty of that too.)
Maybe because researchers learned from the paper to avoid the collapse? Just awareness alone often helps to sidestep a problem.
Same with the "llms don't reason" from "Apple" (two interns working at Apple, but anyway). The media went nuts over it, even though it was littered with implementation mistakes and not worth the paper it was(n't) printed on.
So many AI generated AI bashing articles lately. I wrote a post complaining about running into these, and asking people who've sent me these AI articles multiple of them came from HN. https://lunnova.dev/articles/ai-bashing-ai-slop/
> Domain-specific jargon and high-precision technical terms are sacrificed for "accessibility." The model performs a statistical substitution, replacing a 1-of-10,000 token with a 1-of-100 synonym, effectively diluting the semantic density and specific gravity of the argument.
> The logical flow – originally built on complex, non-linear reasoning – is forced into a predictable, low-perplexity template. Subtext and nuance are ablated to ensure the output satisfies a "standardized" readability score, leaving behind a syntactically perfect but intellectually void shell.
What a fantastic description of the mechanisms by which LLMs erase and distort intelligence!
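A rough back-of-the-envelope of what that "1-of-10,000 for 1-of-100" substitution costs (my own arithmetic, not from the article): the information content of a token with probability p is -log2(p) bits, so the swap roughly halves the bits the word carries.

    import math

    # Illustrative figures only: a rare word vs. its common synonym.
    rare_word_bits   = -math.log2(1 / 10_000)   # ~13.3 bits
    common_word_bits = -math.log2(1 / 100)      # ~6.6 bits
    print(f"rare: {rare_word_bits:.1f} bits, common: {common_word_bits:.1f} bits")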
I agree that AI writing is generic, boring, and dangerous. Further, I think someone could only feel otherwise if they don't have a genuine appreciation for writing.
I feel strongly that LLMs are positioned as an anti-literate technology, currently weaponized by imbeciles who have not and will never know the joy of language, and who intend to extinguish that joy for any of those around them who can still perceive it.
If you thought Google's degradation of search quality was strategic manipulation, wait till you see what they do with tokens.
Not to detract from the overall message, but I think the author doesn't really understand Romanesque and Baroque.
(as an aside, I'd most likely associate Post-Modernism as an architectural style with the output of LLMs - bland, regurgitative, and somewhat incongruous)
At any rate, it seems to me like a reasonable label for what's described:
> Semantic ablation is the algorithmic erosion of high-entropy information. Technically, it is not a "bug" but a structural byproduct of greedy decoding and RLHF (reinforcement learning from human feedback).
> ...
> When an author uses AI for "polishing" a draft, they are not seeing improvement; they are witnessing semantic ablation.
The metaphor is very apt. Literal polishing is removal of outer layers. Compared to the near-synonym "erosion", "ablation" connotes a deliberate act (ordinarily I would say "conscious", but we are talking about LLMs here). Often, that which is removed is the nuance of near-synonyms (there is no pause to consider whether the author intended that nuance). I don't know if the "character" imparted by broader grammatical or structural choices can be called "semantic", but that also seems like a big part of what goes missing in the "LLM house style".
Bluntly: getting AI to "improve" writing, as a fully generic instruction, is naturally going to pull that writing towards how the AI writes by default. Because of course the AI's model of "writing quality" considers that style to be "the best"; that's why it uses it. (Even "consider" feels like anthropomorphizing too much; I feel like I'm hitting the limits of English expressiveness here.)
Then the model will look for clusters that don't fit what the model considers to be Hemingway/Colliers/Post-War and suggest in that fashion.
"edit this" -> blah
"imagine Tom Wolfe took a bunch of cocaine and was getting paid by the word to publish this after his first night with Aline Bernstein" -> probably less blah
the term TFA is looking for is mode collapse https://www.lesswrong.com/posts/t9svvNPNmFf5Qa3TA/mysteries-... and the author could herself learn to write more clearly.
I agree that the term "semantic ablation" is difficult to interpret
But the article describes three mechanisms by which LLMs consistently erase and distort information (Metaphoric Cleansing, Lexical Flattening, and Structural Collapse)
The article does not describe best practices; it's a critique of LLM technology and an analysis of the issues that result from using this technology to generate text to be read by other people.
Do we see this in programming too? I don't think so? Unique, rarely used API methods aren't substituted the same way when refactoring. Perhaps that could give us a clue on how to fix that?
Etc.
Is it possible that what is a good result to you is a pity to someone with more developed taste?
Really?
Here's some alternatives. Some are clunky. But, some aren't.
…just like you can tell whose pubes those are on the shared bar of soap without launching a formal investigation.
…just like you can tell who just wanked in the shared bathroom by the specific guilt radiating off them when they finally emerge.
…just like you can tell which of your mates just shitted at the pub by who's suddenly walking like they're auditioning for a period drama.
…just like you can tell which coworker just had a wank on their lunch break by the post-nut serenity that no amount of hand-washing can disguise.
…just like you can tell whose sneeze left that slug trail on the conference room table by the specific way they're not making eye contact with it.
…just like you can identify which flatmate's cum sock you've accidentally stepped on by the vintage of the crunch.
…just like you can tell who just crop-dusted the elevator by the studied intensity with which one person is suddenly reading the inspection certificate.
We don't need more average stuff - below-average output serves as a proxy that lets one direct their resources towards producing output of higher value.
I have a very large ‘default prompt’ that explicitly deals with the more obnoxious grammatical structures emblematic of LLMs.
I would wager I deal with more amateurishly created AI slop on a daily basis than you do. (Legal field, where everyone is churning out LLM-written briefs.) Most of it is instantly recognizable. And all of it can be fixed with more careful prompt-engineering.
If you think you can spot well-crafted LLM prose generated by someone proficient at the craft of prompt-engineering by, to use an analogy to the early days of image creation, counting how many fingers the hand has, you’re way behind.