Well, this is very interesting, because I'm a native English speaker that studied writing in university, and the deeper I got into the world of literature, the further I was pushed towards simpler language and shorter sentences. It's all Hemingway now, and if I spot an adverb or, lord forbid, a "proceeded to," I feel the pain in my bones.
The way ChatGPT writes drives me insane. As for the author, clearly they're very good, but I prefer a much simpler style. I feel like the big boy SAT words should pop out of the page unaccompanied, just one per page at most.
… though, yes, in average hands a “proceeded to”, and most of the quoted phrases, are garbage. Drilling the average student on trying to make their language superficially “smarter” is a comically bad idea, and is indeed the opposite of what almost all of them need.
> strode purposefully
My wife (a writer) has noticed that fanfic and (many, anyway—plus, I mean, big overlap between these two groups) romance authors loooove this in particular, for whatever reason. Everyone “strides” everywhere. No one can just fucking walk, ever, and it’s always “strode”. It’s a major tell for a certain flavor of amateur.
"He strode up to Helen and asked, 'What are you doing?'"
"He sidled up to Helen and asked, 'What are you doing?'"
"He tromped up to Helen and asked, 'What are you doing?'"
Each of those sentences conveys a slightly different action. You can almost imagine the person's face has a different expression in each version.
Yes, I hate it when amateurs just search/replace by thesaurus. But I think different words have different connotations, even if they mean roughly the same thing. Writing would be poorer if we only ever used "walk".
We don't really do it intentionally in English, at least to the same degree. But there's still a lot of information coded in our word and grammar choices.
Conveying meaning is the whole problem here. An unexpected word choice is a neon sign saying "This is important!" and it disappoints the reader if it is not.
People shouldn't use "strides" just because "walked" is boring. They should use "strides" when it's meaningful in the context of the story.
I’m biased because I am not a very good writer, but I can see why in a book you might want to hint at how someone walked up to someone else to illustrate a point.
When writing articles to inform people, technical docs, or even just letters, don’t use big vocabulary to hint at ideas. Just spell it out literally.
Any other way of writing feels like you are trying to be fancy just for the sake of seeming smart.
Spelling it out literally is precisely what the GP is doing in each of the example sentences — literally saying what the subject is doing, and with the precision of choosing a single word better to convey not only the mere fact of bipedal locomotion, but also the WAY the person walked, with what pace, attitude, and feeling.
This carries MORE information in the exact same number of words. It is the most literal way to spell it out.
A big part of good writing is how to convey more meaning without more words.
Bad writing would be to add more clauses or sentences to say that our subject was confidently striding, conspiratorially sidling, or angrily tromping, and adding much more of those sentences and phrases soon gets tiresome for the reader. Better writing carries the heavier load in the same size sentence by using better word choice, metaphor, etc. (and doing it without going too far the other way and making the writing unintelligibly dense).
Think of "spelling it out literally" like the thousand-line IF statements, whereas good writing uses a more concise function to produce the desired output.
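To make the analogy concrete, here's a toy sketch (all names and sentences are invented for illustration): the "spell it out literally" version enumerates every case in explicit branches, while the "better word choice" version carries the same information through one well-chosen lookup.

```python
# Verbose: spell every case out literally, one branch at a time.
def describe_verbose(pace, attitude):
    if pace == "fast" and attitude == "confident":
        return "He walked quickly and confidently up to Helen."
    if pace == "slow" and attitude == "sneaky":
        return "He walked slowly and furtively up to Helen."
    if pace == "heavy" and attitude == "angry":
        return "He walked heavily and angrily up to Helen."
    return "He walked up to Helen."

# Concise: a single well-chosen verb carries the same load
# in the same-size sentence.
VERBS = {
    ("fast", "confident"): "strode",
    ("slow", "sneaky"): "sidled",
    ("heavy", "angry"): "tromped",
}

def describe_concise(pace, attitude):
    verb = VERBS.get((pace, attitude), "walked")
    return f"He {verb} up to Helen."
```

Both functions produce a grammatical sentence for every input, but the concise one encodes pace and attitude in the verb itself rather than in extra adverbs.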
An author's word choices can certainly fail to convey intended meaning, or convey it too slowly because they are too obscure or are a mismatch for the intended audience — that is just falling off the other side of the good writing tightrope.
A technical paper is an example where the audience expects to see proper technical names and terms of art. Those terms will slow down a general reader, who will be annoyed by the "jargon", but every academic or professional would be annoyed if the "jargon" were edited out in favor of less precise, more everyday words. And vice versa for the same topic published in a general-interest magazine.
So, an important question is whether you are part of the intended audience.
Brevity is the soul of good communication.
"He waddled up to Helen and asked, 'What are you doing?'"
"He kick-flipped up to Helen and asked, 'What are you doing?'"
[edit] electric-slid! Pirouetted! Somersaulted!
The only "stride" I know relates to the gap between heterogeneous elements in a contiguous array
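For anyone unfamiliar with that sense of the word, here's a minimal stdlib-only sketch: in an interleaved buffer (an "array of structs"), the stride is the number of bytes you step to jump from one record to the next. The record layout here is invented purely for illustration.

```python
import struct

# A contiguous buffer of packed records: each record holds an int32 id
# and a float64 value. The stride is the size of one whole record.
record_fmt = "=id"                    # int32 + float64, no padding
stride = struct.calcsize(record_fmt)  # bytes between successive records

buf = bytearray()
for i in range(3):
    buf += struct.pack(record_fmt, i, i * 1.5)

# To read record n, jump n * stride bytes into the buffer.
rec_id, rec_val = struct.unpack_from(record_fmt, buf, 2 * stride)
```

With this layout the stride is 12 bytes (4 for the int32, 8 for the float64), so record 2 starts at byte offset 24.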
> The only "stride" I know relates to the gap between heterogeneous elements in a contiguous array
I am also not a native English speaker, but I got to know the verb to "to stride" from The Lord of the Rings: Aragorn is originally introduced under the name "Strider":
> https://en.wikipedia.org/w/index.php?title=Aragorn&oldid=132...
"Aragorn is a fictional character and a protagonist in J. R. R. Tolkien's The Lord of the Rings. Aragorn is a Ranger of the North, first introduced with the name Strider and later revealed to be the heir of Isildur, an ancient King of Arnor and Gondor."
(native english speaker who was a bookworm as a kid; I admittedly had to ask gemini to recall the general phrase that I had in mind)
I mean, it seems like it could work if you get to follow it up with a "de-education" step. Phase 1: force them to widen their vocabulary by using as much of it as possible. Phase 2: teach them which words are actually appropriate to use.
> This style has a history, of course, a history far older than the microchip: It is a direct linguistic descendant of the British Empire. The English we were taught was not the fluid, evolving language of modern-day London or California, filled with slang and convenient abbreviations. It was the Queen's English, the language of the colonial administrator, the missionary, the headmaster. It was the language of the Bible, of Shakespeare, of the law. It was a tool of power, and we were taught to wield it with precision. Mastering its formal cadences, its slightly archaic vocabulary, its rigid grammatical structures, was not just about passing an exam.
> It was a signal. It was proof that you were educated, that you were civilised, that you were ready to take your place in the order of things.
Much of writing style is not about conveying meaning but conveying the author's identity. And much of that is about matching the fashion of the group you want to be a member of.
Fashion tends to go through cycles because once the less prestigious group becomes sufficiently skilled at emulating the prestige style, the prestigious need a new fashion to distinguish themselves. And if the emulated style is ostentatious and flowery, then the new prestige style will be the opposite.
Aping Hemingway's writing style is in a lot of ways like $1,000 ripped jeans. It sort of says "I can look poor because I'm so rich I don't even have to bother trying to look rich."
(I agree, of course, that there is a lot to be said for clean, spare prose. But writing without adverbs doesn't mean one necessarily has the clarity of thought of Hemingway. For many, it's just the way you write so that everyone knows you got educated in a place that told you to write that way.)
Reading through my old self-reviews, they're basically exactly like your examples: making sentences longer just to make your story more interesting.
Because at the end your promotion wasn't about what you achieved. It was about your story and how 7 people you didn't know voted on it.
"No weasel words!"
Taught?
But hey, at least you know I didn't use ChatGPT to conjure that comment.
But the voice in my head does not.
Pedantry is what makes me better.
(1) writing to communicate ideas, in which case simpler is almost always better. There's something hypnotic about simple writing (e.g. Paul Graham's essays) where information just flows frictionlessly into your head.
(2) writing as a form of self-expression, in which case flowery and artistic prose is preferred.
Here's a good David Foster Wallace quote in his interview with Bryan Garner:
> "there’s a real difference between writing where you’re communicating to somebody, the same way I’m trying to communicate with you, versus writing that’s almost a well-structured diary entry where the point is [singing] “This is me, this is me!” and it’s going out into the world.
Or just find the appropriate 'simple' word, which is very often available.
I could say "Trump's unpredictable, seemingly irrational policy choices have alienated our allies, undermined trust in public institutions, and harmed the US economy"
Or I could say "The economy sucks and it's Trump's fault because he's dumb and an asshole"
They both communicate the same broad idea - but which communicates it better? It depends on the audience.
Ugh. They say different things. The first describes the policy mechanisms and impacts. The second says nothing about those things; it describes your emotions.
The biggest communication problem I see now is that people, especially on the Internet, including on HN, use the latter for the former purpose and end up saying nothing.
Many all-time great writers, Hemingway being the leading exemplar, completely disagree.
I'd characterise Americans as less pretentious and more straight talking.
This kind of flowery language is typical (or symptomatic, depending on the diagnosis) of how English people actually used to speak and write.
The average English vocabulary has dwindled noticeably in my life.
Various registers representing a huge proportion of US English we see and hear day-to-day are terrible. American “Business English” is notably bad, and is marked by this sort of fake-fancy language. The dialect our cops use is perhaps even worse, but at least most of us don’t have to read or hear it as much as the business variety.
If you mean 'communicate information', no. Communication, including written, is for emotion, social expression, and other things before information.
Even information requires those other things to be retained well.
Ugh, and journalists often slip into cop dialect in their articles. It's disgustingly propagandistic.
Notice that cops never kill or shoot someone, even in situations where they're blatantly in the wrong. It's always, "service weapon was discharged" or "subject was fired upon." Make sure to throw a couple "proceeded to's" in there for good measure.
Image: https://media.snopes.com/2016/09/looting.jpg
Snopes: https://www.snopes.com/fact-check/hurricane-katrina-looters/
my significant other loves the "real life mormon housewives" and "lovingly blind" reality shows, and when they use business english (a weird thing to do when talking about relationships, but hey, what do I know I'm an engineer) it's a tell that they're lying.
Never thought of Strunk & White as being distinctly American, but I guess you have a point.
> “the American MFA system, spearheaded by the infamous Iowa Writers’ Workshop” as a “content farm” first designed to optimize for “the spread of anti-Communist propaganda through highbrow literature.” Its algorithm: “More Hemingway, less Dos Passos.”
https://www.openculture.com/2018/12/cia-helped-shaped-americ...
Then you start learning more & more abstraction (classes, patterns, monads...).
In the end you strive to write simple code, just like at the beginning.
Why did you say you were "pushed towards" simpler language instead of "I liked it more"?
Why did you say "I feel the pain in my bones" and "drives me insane" instead of "I dislike it"?
Why did you say "the big boy SAT words should pop out of the page unaccompanied" instead of "there should only be one big word per page"?
Perhaps flowery language expands your ability to express yourself?
As a native English speaker who studied writing at university, do you think "who" should be used with people while "that" should only be used with things, or the other way round? Or should I just not care?
Edit: missing things
Use 'who' with people especially, often with other living beings ('my dog, who runs away daily, always is home for dinner') or groups of them ('the NY Yankees, who won the championship that year, were my favorite'), but never with objects unless pretending they live ('my stuffed bear, who sleeps in my bed, wakes me every morning').
If you care about these things, the Chicago Manual of Style is a large but highly respected guide. A shorter one is The Elements of Style by Strunk & White. You can find both on the Internet Archive, I'm almost certain.
tbh I kind of prefer it that way: it's an "AI wrote this" flag. If a human can't write about their day without constructs like "Not a short commute, but a voyage from the suburbs to the heart of the city. I don't just casually pop in to the office; I travel to the hub of $company's development", they need to get better at writing too
Tangent: the thing I find most annoying about ChatGPT's use of em-dashes is that it never even uses them for the one thing they're best suited for. ChatGPT's em-dashes could almost always be replaced with a colon or a comma.
But the true non-redundant-syntax use of em-dashes in English prose, is in the embedding into a sentence of self-interruptive 'joiner' sub-sentences that can themselves bear punctuated sub-clauses. "X—or Y, maybe—but never Z" sorta sentences.
These things are spoken entirely differently than — and on the page, they read entirely differently to — regular parenthetical-bearing sentences.
No, seriously, compare/contrast: "these things are spoken entirely differently than (and on the page, they read entirely differently to) regular parenthetical-bearing sentences."
Different cadence; different pacing; possibly a different shade of meaning (insofar as the emotional state of the author/speaker is part of the conveyed message.)
But, for some reason, ChatGPT just never constructs these kinds of self-interruptive sentences. I'm not sure it even knows how.
> No, seriously, compare/contrast: "these things are spoken entirely differently than (and on the page, they read entirely differently to) regular parenthetical-bearing sentences."
Those are spoken the same way, they read the same way, and they mean the same thing.
Aside: it's probably just style (maybe some style guides call for the way you did it), but using em-dashes for this purpose with whitespace on each side of them looks/feels wrong to me. Anyone know if that's regional or something?
Parentheses to me always feel like the speaker switching to camera #3 while holding a hand up to their mouth conspiratorially.
Em dashes are same-camera with maybe some kind of gesticulation such as pointing or hands up, palms down, then palms up when terminating the emdash clause.
• The length of the verbal pause is different. (It's hard to quantify this, as it's relative to your speaking rate, which can fluctuate even within a sentence. But I can maybe describe it in terms of meter in poetry/songwriting: when allowed to, a parenthetical pause may be read to act as a one-syllable rest in the meter of a poem, often helpfully shifting the words in the parenthetical over to properly end-align a pair of rhyming [but otherwise misaligned] feet. An em-dash, on the other hand, acts as only a half-syllable rest; it therefore offsets the meter of the words in the subclause that follow, until the closing em-dash adds another half-syllable rest to set things right. This is in part why ChatGPT's favored sentences, consisting of "peer" clauses joined by a single em-dash, are somewhat grating to mentally read aloud; you end up "off" by a half-syllable after them, unless you can read ahead far enough to notice that there's no closing em-dash in the sentence, and so allow the em-dash-length pause to read as a semicolon-length pause instead.)
• The voicing of the last word before the opening parenthesis / first em-dash starts is different. (paren = slow down for last few words before the paren, then suddenly speed up, and override the word's normal tonal emphasis with a last-syllable-emphasized rising tone + de-voicing of vowels; em-dash = slow down and over-enunciate last few words before the em-dash, then read the last syllable before the em-dash louder with a overridden falling voiced tone)
• The speed at which, and vocal register with which, the aside / subclause is read is different. (parens = lowest register you can comfortably speak at, slightly quieter, slightly faster than you were delivering the toplevel sentence; em-dashes = delivery same speed or slower, first few syllables given overridden voiced emphasis with rising tone from low to normal, and last few syllables given overridden voiced emphasis with falling tone from normal to low)
• The voicing of the first words after the subclause ends is different. (closing paren = resume speaking precisely as if the parenthetical didn't happen; second em-dash = give a fast, flat-low nasally voiced performance of the first one or two syllables after the em-dash.)
To describe the overall effect of these tweaks:
A parenthetical should be heard as if embedded into the sentence very deliberately, but delivered as an aside / tangent, smaller and off-to-the-side, almost an "inlined footnote", trying to not distract from the point, nor to "blow the listener's stack" by losing the thread of the toplevel point in considering it.
An em-dash-enclosed interruptive subclause should read like the speaker has realized at the last moment that they have two related points to make; that they are seemingly proceeding, after a stutter, to finish the sentence with the subclause; but that they are then "backing up" and finishing the same sentence again with the toplevel clause. The verbalization should be able to be visualized as the outer sentence being "squashed in" to "make room" for the interruptive subclause; and the interruptive subclause "squashing at the edges" [tonally up or down, though usually down] to indicate its own "squeezed in" beginning and end edges.
Note that this isn't subjective/anecdotal descriptions from how I speak myself. These are actually my attempt to distill vocal coaching guidelines I've learned for:
• live sight-reading of teleprompter lines containing these elements, as a TV show host / news anchor
• default-assumed directorial expectations for lines containing elements like these, when giving screenplay readings as a [voice] actor (before any directorial "notes" come into play)
As GPT would say, "You've hit upon a crucial point underlying the entire situation!"
Like, if I ask GPT5 to convert 75f to celsius, it will say "OK, here's the tight answer. No fluff. Just the actual result you need to know." and then in a new graf say "It's 23.8c." (or whatever).
As an aside, I've noticed the self-description happens even more often when extended thinking mode is being used. My unverified intuition is that it references my custom instructions and memory more than once during the thinking process, as it then seems more primed than usual to mimic vocabulary from any saved text like that.
I think the fluff, the emojis, the sycophancy is all symptomatic of the training process and human feedback.
I was raised to be respectful by "getting to the point, afap" to avoid wasting anybody's time.
But I've noticed that mostly only members of the science and legal communities exercise similar principles.
"He then proceeded to" in these situations can basically always just be "he (verb)".
Huh, apparently 'proceeded' was used more commonly in 19th-century writing [1].
[1] https://books.google.com/ngrams/graph?content=Proceeded&year...
> I did the work.
> I worked.
I don't really see the difference between the two though.
Compare:
1. I did the work for that last week.
2. I proceeded to do the work for that last week.
Sentence 2 strikes me as questionably grammatical. It needs to be proceeding from something in the context.
Wording? Don't you mean diction?
I didn't claim that this was exclusively American. Though I'd have to admit that one doesn't have to be American to adopt Americanisms: rhotic Rs, Netflix color-grading, and copy-cat political movements are other American cultural artifacts showing up across the world due to America's dominance of the zeitgeist.
Rap verses in pop songs weren't a spontaneous phenomenon across the globe; the origins are traceably American, but that doesn't make all rappers American.
>I proceeded to open the fridge
>I went to open the fridge
or
>I proceeded to flush the toilet
>I then flushed the toilet
There's nothing wrong with "proceeded", it's just one of those things that's overused by bad writers.
Only a handful of words ("got", "y'know" and "fuck") rival its versatility.
I'm the complete opposite. Hemingway ruined writing styles (and I have a pet theory that his, and Plain English, short sentences also helped reduce literacy in the long run in a similar way TikTok ruins attention spans). I'm a 19th century reader at heart. Give me Melville, Eliot, Hawthorne, though keep your Dickens.
I tend to struggle with art when I can’t tell whether it’s supposed to be funny, but I’m finding it funny (I’ve been very slow to warm up to hip-hop for this reason, and metal remains inaccessible to me because of it). Something clicked on that second approach and I just got that yes, it’s pretty much all supposed to be funny, down to every word, even when it seems serious—until, perhaps, he blind-sides you with something actually deeply affecting and human (I think about the fire-fighting sequence from that book all the time).
Dickens is an all-dessert meal, except sometimes he sneaks a damn delicious steak right in the middle. Like, word-for-word, I’d say he leans harder into humor, by a long shot, than someone like Vonnegut, even. But almost all of it’s dead-pan, and some of it’s the sort of humor you get when someone who knows better does poorly on purpose, in calculated ways. If you ever think you’re laughing at him, not with… I reckon you’re probably wrong.
What’s perhaps most miraculous about this turn-around is that I usually don’t enjoy comedic novels, but once I figured Dickens out, he works for me.
(To your broader point—yeah, agreed that this sucks, good advice for bad writers becoming how most judge all writers has been harmful)
Very much the same; many a US writer's prose is terribly tedious. It comes across just as clinical as their HOA-approved suburban hellscapes. Somebody once told me a writer's job is also to expand language. It wasn't a US citizen.
Language is like clothing.
Those with no taste - but enough money - will dress in gaudy ways to show off their wealth. The clothing is merely a vector for this purpose. They won’t use a piece of jewelry only if it contributes to the ensemble. Oh, no. They’ll drape themselves with gold chains and festoon their fingers with chunky diamond rings. Brand names will litter their clothing. The composition will lack intelligibility, cohesiveness, and proportion. It will be ugly.
By analogy, those with no taste - but enough vocabulary - will use words in flashy ways to show off their knowledge. Language is merely a vector for this purpose. They won’t use a word only if it contributes to the prose. Oh, no. They’ll drape their phrases with unnecessarily unusual terms and festoon their sentences with clumsy grammar. Obfuscation, rather than clarity, will define their writing. The composition will lack intelligibility, cohesiveness, and proportion. It will be ugly.
As you can see, the first difference is one of purpose: the vulgarian aims for the wrong thing.
You might also say that the vulgarian also lacks a kind of temperance in speech.
You got the first bit right. Language and clothing accord to fashions.
What counts as gaudy versus grounded, discreet versus disrespectful—this turns on moving cultural values. And those at the top implicitly benefit from this drift, which lets us dismiss as gaudy someone wearing a classic hand-me-down who isn’t clued into a hoodie and jeans being the surfer’s English to Nairobi’s formality.
(Spiced food was held in high regard in ancient Rome and Medieval European courts. Until spices became plentiful. Then the focus shifted "to emphasize ingredients’ natural flavors" [1]. A similar shift happened as post-War America got rich. Canned plenty and fully-stocked pantries made way for farm-to-table freshness and simple seasonings. And now, we're swinging back towards fuller spice cabinets as a mark of global taste.)
[1] https://historyfacts.com/world-history/article/how-did-salt-...
They'll then spend the first few years of their career unlearning this and attempting to write as directly and clearly as possible with as few words as possible.
The ideas, concepts and expectations can be refined after you've learned the foundational knowledge, skills and history required to do so.
A lot of "why do we do things like that" questions students will naturally have can be answered with "because we used to do things like this/we need to avoid things like this/etc"
The problem is that teachers stop pushing complexity for complexity's sake way too late.
[0] https://www.theverge.com/features/23764584/ai-artificial-int...
ChatGPT :|
ChatGPT (japan) XD
- Do not confuse 'night' with 'evening'.
- This office spells it 'programme'.
- Hotels are 'kept', not 'run'.
- Dead men do not leave 'wives', but they may leave 'widows'.
- 'Very' is a word often used without discrimination. It is not difficult to express the same meaning when it is eliminated.
- The relative pronoun 'that' is used about three times superfluously to the one time that it helps the sense.
- Do not write 'this city' when you mean Chicago.
[1] https://www.merriam-webster.com/grammar/very-unique-and-abso...
Update: To illustrate this, here's a comparison of a paragraph from this article:
> It is a new frontier of the same old struggle: The struggle to be seen, to be understood, to be granted the same presumption of humanity that is afforded so easily to others. My writing is not a product of a machine. It is a product of my history. It is the echo of a colonial legacy, the result of a rigorous education, and a testament to the effort required to master the official language of my own country.
And ChatGPT's "improvement":
> This is a new frontier of an old struggle: the struggle to be seen, to be understood, to be granted the easy presumption of humanity that others receive without question. My writing is not the product of a machine. It is the product of history—my history. It carries the echo of a colonial legacy, bears the imprint of a rigorous education, and stands as evidence of the labor required to master the official language of my own country.
Yes, there's an additional em-dash, but what stands out to me more is the grandiosity. Though I have to admit, it's closer than I would have thought before trying it out; maybe the author does have a point.
As a reader, I persistently feel like I just zoned out. I didn't. It's just the mind responding to having absorbed zero information despite reading a lot of–at face value–text that seems like it was written with purpose.
The main difference in the author's writing to LLM I see is that the flourish and the structure mentioned is used meaningfully, they circle around a bit too much for my taste but it's not nearly as boring as reading ai slop which usually stretch a simple idea over several paragraphs
Because you told it to improve it. Modern LLMs are trained to follow instructions unquestioningly; they will never tell you "you told me to do X, but I don't think I should". They'll just do it, even if it's unnecessary.
If you want the LLM to avoid making changes that it thinks are unnecessary, you need to explicitly give it the option to do so in your prompt.
A reasonable human, given the same task, wouldn't just make arbitrary changes to an already-well-composed sentence with no identified typos and hope for the best. They would note that the sentence is already generally high-quality, then ask probing questions about any perceived issues, and about the context in which, and the ends to which, it must become "better".
Without shift it's an en dash (–), with shift an em dash (—). Default X11 mapping for a German keyboard layout, zero config of mine.
Angled quotes I use only on systems on which I've configured a compose key, or Android when I'm typing Chinese.
I don't like any kind of auto-replacement with physical keyboards, so I turn off "smart quotes" on macOS.
Anyway I use characters like that all the time, but it's never auto-replace.
I am, it's on the default German X11 keyboard layout. Same for · × ÷ …
And that's without going to the trusty compose key (Caps Lock for me)… wonders like ½ and H₂O await!
I've had a "trigger finger" for Alt+0151 on Windows since 2010 at least.
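For anyone untangling which character those various inputs actually produce, a quick stdlib check of the three dash characters people conflate (Windows Alt+0151 maps to the em dash via the 0x97 slot in the Windows-1252 code page):

```python
# The three dash characters and their Unicode code points:
# hyphen-minus, en dash, em dash.
dashes = {"hyphen-minus": "-", "en dash": "\u2013", "em dash": "\u2014"}
for name, ch in dashes.items():
    print(f"{name}: {ch} U+{ord(ch):04X}")
```

So the en dash is U+2013 and the em dash is U+2014, regardless of which keyboard layout, compose key, or Alt code produced it.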
I'm sure there's some voice actor out there who can't get work because they sound too similar to the generated voices that appear in TikTok videos.
I suppose I don’t mind people using AI voices if they have a thick accent or are shy about their voice, but if I’m watching a video and clock the voice as AI (usually because the tone is professional but has no expression and then the speaker mispronounces a common word or acronym) it does make me start to wonder if the script is AI. There are a lot of people churning out tutorials that seem useful at first but turn out to have no content (“draw the rest of the owl” type stuff) because they asked AI to create a tutorial for something and didn’t edit or reprompt based on the output. The video essay world is also starting to get hit pretty hard, to the point that I’m less willing than ever to watch content unless I already know the creator’s work.
[0] Shameless plug: https://youtu.be/PGiTkkMOfiw
His responses in Zoom calls were the same: mechanical, sounding AI-generated. I even checked one of his WhatsApp responses by asking Meta AI whether it was AI-written, and Meta AI agreed that it was, giving reasons why it believed the message was AI-written.
When I showed the response to the colleague, he swore he was not using any AI to write his responses. I believed him after he told me that. And now, reading this, I can imagine that it's not an isolated experience.
I will never understand why some people apparently think asking a chat bot whether text was written by a chat bot is a reasonable approach to determining whether text was written by a chat bot.
People are unplugging their brains and aren't even aware that their questions can't be answered by LLMs. I've witnessed this with smart, educated people; I can't imagine how bad it's going to be during the formative years.
"I cannot imagine figuring out how to raise a newborn without ChatGPT. Clearly, people did it for a long time, no problem."
Basically he didn’t know much about newborn and relied on ChatGPT for answers. That was a self deprecating attempt on a late night show. Like every other freaking guests would do, no matter how cliché. With a marketing slant of course. He clearly said other people don’t need ChatGPT.
Given all of the replies on this thread, HN is apparently willing to stretch the truth if Sam Altman can be put under any negative light.
https://www.benzinga.com/markets/tech/25/12/49323477/openais...
At the same time, their interpretation doesn’t seem that far off. As per your comment, Sam said he “cannot imagine figuring out how” which is pretty close to admitting he’s clueless how anyone does it, which is what your parent comment said.
It’s the difference between “I don’t know how to paint” and “I cannot imagine figuring out how to paint”. Or “I don’t know how to plant a garden” and “I cannot imagine figuring out how to plant a garden”. Or “I don’t know how to program” and “I cannot imagine figuring out how to program”.
In the former cases, one may not know specifically how to do them but can imagine figuring those out. They could read a book, try things out, ask someone who has achieved the results they seek… If you can imagine how other people might’ve done it, you can imagine figuring it out. In the latter cases, it means you have zero idea where to start, you can’t even imagine how other people do it, hence you don’t know how anyone does do it.
The interpretation in your parent comment may be a bit loose (again, I disagree with the use of “literally”, though that’s a lost battle), but it is hardly unfair.
>Clearly, people did it for a long time, no problem.
In fact it means Altman thinks the exact opposite of "he didn't know how anyone could raise a baby without using a chatbot": while it's not imaginable to him, people make do anyway, so clearly it very much is possible to raise kids without ChatGPT.
What the gp did is the equivalent of someone saying "I don't believe this, but XYZ" and quoting them as simply saying they believe XYZ. People are eating it up though because it's a dig at someone they don't like.
> The OpenAI CEO said he "got a great answer back" and was told that it was normal for his son not to be crawling yet.
To be fair, that is a relatable anxiety. But I can't imagine Altman having the same difficulties as normal parents. He can easily pay for round the clock childcare including during night-times, weekends, mealtimes, and sickness. Not that he does, necessarily, but it's there when he needs it. He'll never know the crushing feeling of spending all day and all night soothing a coughing, congested one-year-old whilst feeling like absolute hell himself and having no other recourse.
He said they have no idea how to make money, that they’ll achieve AGI then ask it how to profit; he’s baffled that chatbots are making social media feel fake; the thing you mentioned with raising a child…
https://www.startupbell.net/post/sam-altman-told-investors-b...
https://techcrunch.com/2025/09/08/sam-altman-says-that-bots-...
https://futurism.com/artificial-intelligence/sam-altman-cari...
The stakes are too high and the amount you’re allowed to get wrong is so low. Having been through the infant-wringer myself yeah some people fret over things that aren’t that big of a deal, but some things can literally be life or death. I can’t imagine trying to vet ChatGPT’s “advice” while delirious from lack of sleep and still in the trenches of learning to be a parent.
But of course he just had to get that great marketing sound bite didn’t he?
I cannot believe someone would wonder how people managed to decode "my baby dropped pizza and then giggled" before LLMs. I mean, if someone is honestly terrified about the answer to this life-or-death question and cannot figure out life without an LLM, they probably shouldn't be a parent.
Then again, Altman is faking it. Not sure if what he's faking is this affectation of being a clueless parent, or of being a human being.
They will ask “how much water should my newborn drink?” That’s a dangerous thing to get wrong (outside of certain circumstances, the answer is “none.” Milk/formula provides necessary hydration).
They will ask about healthy food alternatives - what if it tells them to feed their baby fresh honey on some homemade concoction (botulism risk)?
People googled this stuff before, but a basic search doesn't argue back about how right it is or consistently feed you emotionally charged bad info in the same fashion.
But yeah, I can imagine a multi-modal model actually might have more information and common sense than a human in a (for them) novel situation.
If only to say "don't be an idiot" or "pick higher ground". Or even just as a rubber duck!
https://www.aiweirdness.com/dont-use-ai-detectors-for-anythi...
If you ask an AI to grade an essay, it will give the highest grade to the essay it wrote itself.
https://www.pangram.com/blog/pangram-predicts-21-of-iclr-rev...
What I have seen is ChatGPT and Claude battling it out, always correcting and finding fault with each other's output (trying to solve the same problem). It's hilarious.
I can't blame others though. I was looking at notes I wrote in 2019, and even those had a flavor of looking like ChatGPT wrote them. I use the word "delve" and the "not just X, but also Y" construction often, according to my Obsidian. I've taken to inserting the occasional spelling mistake or Unorthodox Patterns of Writing(tm), even when I would not otherwise.
It's a lot easier to get LLMs to adhere to good writing guides than it is to get them to create something informative and useful. I like to think my notes and writing are informative and useful.
This would have been my first question to the parent comment, since I guess he never had similar correspondence with this friend prior to 2023. Otherwise it would be hard to convince me without an explanation for the switch (transition during formative high school / college years, etc.).
... How does that work, exactly?
This sucks, but it needs to be done in education, and/or at least in areas where good writing and effective communication is considered important. Good grades need to be awarded only to writing that exceeds the quality and/or personality of a chat-bot, because, otherwise, the degree is being awarded to a person who is no more useful than a clumsy tool.
And I don't mean avoiding superficialities like the em-dash: I mean the bland over-verbosity and other systemic tells—or rather, smells—of AI slop.
How dare they.
Some things I've learned/realized from this thread:
1. You can make an em-dash on Macs using -- or a keyboard shortcut
2. On Windows you can do something like Alt + 0151 which shows why I have never done it on purpose... (my first ever —)
3. Other people might have em-dashes on their keyboard?
I still think it's a relatively good marker for ChatGPT-generated-text iff you are looking at text that probably doesn't apply to the above situations (give me more if you think of them), but I will keep in mind in the future that it's not a guarantee and that people do not have the exact same computer setup as me. Always good to remember that. I still do the double space after the end of a sentence after all.
(And as #9 on the leaderboard, I feel the need to defend myself!)
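For the curious, the Alt-code behavior in item 2 can be checked in a few lines: Alt+0151 inserts character 151 of the Windows-1252 code page, which maps to the Unicode em dash (U+2014), while Alt+0150 yields the en dash (U+2013). A quick sketch:

```python
# Alt+0151 / Alt+0150 on Windows insert Windows-1252 code points 151
# and 150, which decode to the Unicode em dash and en dash.
em_dash = bytes([151]).decode("cp1252")
en_dash = bytes([150]).decode("cp1252")

assert em_dash == "\u2014"  # EM DASH
assert en_dash == "\u2013"  # EN DASH
print(em_dash, en_dash)  # → — –
```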
Its overuse is definitely a marker of either AI or a poorly written body of text. In my opinion, if you have to rely on excessive parentheticals, then you are usually better off restructuring your sentences to flow more clearly.
When I copied and pasted them in, it obviously failed, so... yeah. If you have terminal commands that use `--`, don't copy-paste them out of Notepad.
E.g., I was typing Alt-0151 and Alt-0150 (en-dash) on the reg in my middle school and high school essays, along with in AIM. While some of my classmates were probably typing double hyphens, my group of friends used the keyboard shortcuts, so I am now learning from this "detect an LLM" phase that there's a vocal group of people who do not share this experience or perspective of human communication. And that having a mother who worked in technical publishing and insisted I use the correct punctuation rather than two hyphens was not part of everyone's childhood.
The formal part resonates, because most non-native English speakers learnt it at school, which teaches you literary English rather than day-to-day English. And this holds for most foreign languages learnt in this context: you write prose, essays, three-part compositions with an introduction and a conclusion. I got the same kind of education in France, though years of working in IT gave me a more "American" English style: straight to the point and short, with a simpler vocabulary for everyday use.
As for whether your writing is ChatGPT: it's definitely not. What those "AI bounty hunters" would miss in such an essay: there is no fluff. Yes, the sentences may use the classical "three points" method, but they don't stick out like a sore thumb; I would not have noticed had the author not mentioned it. This does not feel like filler. Usually with AI articles, I find myself skipping more than half of each paragraph due to the low information density; just give me the prompt. This article got me reading every single word. Can we call this vibe reading?
We will all soon write and talk like ChatGPT. Kids are growing up asking ChatGPT for homework help; people use it for therapy, for résumés and CVs, for their imaginary romantic "friends"; and when they ask the search engine everyday questions, they'll get some LLM response. After some time you'll find yourself chatting with a relative or a coworker over coffee, and instead of hearing "lol, Jim, that's bullshit" you'll hear something like "you're absolutely right, here, let me show you a bulleted list of why this is the case...". Even scarier, you'll soon hear yourself say that to someone as well.
(check-mark emoji) Add more emoji — humans love them! (red x emoji) Avoid negative words like "bullshit" and "scarier."
(thumbs-up emoji) Before long you'll get past the human feedback of reinforcement learning! (smiley-face)
I just saw someone today that multiple people accused of using ChatGPT, but their post was one solid block of text and had multiple grammar errors. But they used something similar to the way ChatGPT speaks, so they got accused of it and the accusers got massive upvotes.
https://www.theguardian.com/technology/2024/apr/16/techscape...
They said Nigerian, but there may be a common way English is taught in the entire region. Maybe the article author will chip in.
> ChatGPT is designed to write well
If you define well as overly verbose, avoiding anything that could be considered controversial, and generally sycophantic but bland soulless corporate speak, yes.
Nigeria and Kenya are two very different regions with different spheres of business. I don't know, but I wouldn't expect the English to overlap that much.
Cyprus, Somalia, Sierra Leone, Kuwait, Tanzania, Jamaica, Trinidad and Tobago, Uganda, Kenya, Malawi, Zambia, Malta, Gambia, Guyana, Botswana, Lesotho, Barbados, Yemen, Mauritius, Eswatini (Swaziland).
If what you're saying is right, then you'd have to admit Jamaican and Barbadian English are just the same as Kenyan or Nigerian... but they're not. They're radically different because they're radically different regions. Uganda and Kenya being similar is what I would expect, but not necessarily Nigeria...
They're radically different predominantly at the street level and everyday usage, but the kind of professional English of journalists, academics and writers that the author of the article was surrounded by is very recognizable.
You can tell an American from an Australian on the beach but in a journal or article in a paper of record that's much more difficult. Higher ed English with its roots in a classical British education you can find all over the globe.
Go read some Kenyan news. It's very obvious.
All we can hope is for a local to show up and explain.
I have a confession to make: I didn't think lolcat speak was funny, even at the time.
It's pretty annoying and once you catch them doing it, you can't stop.
Writing well is about communicating ideas effectively to other humans. To be fair, throughout linguistic history it was easier to appeal to an audience’s innate sense of authority by “sounding smart”. Actually being smart in using the written word to hone the sharpness of a penetrating idea is not particularly evident in LLM’s to date.
If you're using it to write in a programming language, you often actually get something that runs (provided your specifications are good, or your instructions for writing the specifications are specific enough).
If you're asking for natural language output... yeah, you need to watch it like a hawk, by hand. It'd be nice if there were some way to test-suite natural-language writing.
The tests were even worse. They exercised the code, tossed the result, then essentially asserted that true was equal to true.
When I told it what was wrong and how to fix it, it instead introduced some superfluous public properties and a few new defects without correcting the original mistake.
The only code I would trust today's agents with is so simple I don't want or need an agent to write it.
I think it depends on what models you are using and what you're asking them to do, and whether that's actually inside their technical abilities. There are not always good manuals for this.
My last experience: I asked Claude to code-read for me, and it dug out some really obscure bugs in old Siemens Structured Text source code.
A friend's last experience: they had an agent write an entire Christmas-themed adventure game from scratch (that ran perfectly).
> there is - in my observational opinion - a rather dark and insidious slant to it
That feels too authentic and personal to be any of the current generation of LLMs.
But yes the current commercial ones are somewhat controllable, much of the time.
In Menlo font (Chrome on Mac's default monospace font, used for HN comments) em-dash(—) and en-dash (–) use the same glyph, though.
It gives off a car-salesman vibe. I really dislike it, and personally I consider it a very bad writing style for this very reason.
I very much prefer LLMs that don't appear to be trained on such data, or I reword my questions a lot to get saner writing styles.
That being said it also reminds me of journalistic articles that feel like the person just tried to reach some quota using up a lot of grand words to say nothing. In my country of residence the biggest medium (a public one) has certain sections that are written exactly like that. Luckily these are labeled. It's the section that is a bit more general, not just news and a bit more "artsy" and I know that their content is largely meaningless and untrue. Usually it's enough to click on the source link or find the source yourself to see it says something completely different. Or it's a topic that one knows about. So there even are multiple layers to being "like LLMs".
The fact that people are taught to write that way outside of marketing or something surprises me.
That being said, this is just my general genuine dislike of this writing style. How an LLM writes is up to a lot of things, also how you engage with it. To some degree they copy your own style, because of how they work. But for generic things there is always that "marketing talk" which I always assumed is simply because the internet/social media is littered with ads.
Are Kenyans really taught to write that way?
I’m highly skeptical. At one point the author tries to argue this local pedagogy is downstream of “The Queen’s English” & British imperial tradition, but modern LLM-speak is a couple orders of magnitude closer in the vector space to LinkedIn clout-chasing than anything from that world.
Here are some random examples from one of the (at least) half-dozen LLM-co-written posts that rose high on the front page over the weekend:
https://blog.canoozie.net/disks-lie-building-a-wal-that-actu...
You write a record to disk before applying it to your in-memory state. If you crash, you replay the log and recover. Done. Except your disk is lying to you.
This is why people who've lost data in production are paranoid about durability. And rightfully so.
Why this matters: Hardware bit flips happen. Disk firmware corrupts data. Memory busses misbehave. And here's the kicker: None of these trigger an error flag.
Together, they mean: "I know this is slower. I also know I actually care about durability."
This creates an ordering guarantee without context switches. Both writes complete before we return control to the application. No race conditions. No reordering.
... I only got about halfway through. This is just phrasing, forget about the clickbaity noun-phrase subheads or random boldface.
None of these are representative (I hope!) of the kind of "sophisticated" writing meant to reinforce class distinctions or whatever. It's just blech LinkedIn-speak.
Outrage mills mill outrage. If it wasn't this, it would be something else. The fact that the charge resonated is notable. But the fact that it exists is not.
They have to actually read material, and not just use the structure as a proxy for ability.
‘Striding’ is ‘purposeful’; ‘trudging’ expresses ‘weariness’; ‘ambling’ implies ‘nonchalance’.
Good verb choice reduces adverb dependence.
Earlier today I stumbled upon a blog post that started with a sentence that was obviously written by someone with a Slavic background (most writers from other language families create certain grammatical patterns when writing in another language; e.g. German is also quite recognizable). My first thought was "great, this is most likely not written by an LLM".
Authenticity, whether it is sincere or not, can become an incredibly powerful force now and then. Regardless of AI, the communication style in tech, and overall, was bound to go back to basics after the hacker culture of the post-dotcom era morphed, in the 2010s, into the very corporatism it was fighting to begin with, yet again.
Omitting articles? To me, that has always signaled "this will be an interesting and enlightening read, although terse and in need of careful thought." I've found sites from that part of the Internet to be very useful for highly technical and obscure topics.
I would not want to be an artist in the current environment, it’s total chaos.
Social media artists, gallery artists and artists in the industry (I mean people who work for big game/film studios, not industrial designers) are very different groups. Social media artists are having it the hardest.
But yeah, I definitely find the mild grammatical quirks expected from English-as-a-foreign-language speakers a positive these days, because the writing appears to reflect their actual thoughts and actual fluency.
Perhaps the US-centric "optimization" of English is to blame here, since it is so obvious in regular US media we all consume across the planet, and is likely the contrasting style.
Some people are perhaps overly focused on superficial things like em-dashes. The real tells for ChatGPT writing are more subtle: a tendency toward hyperbole (it's not A, it's [florid restatement of essentially A] B!), a certain kind of rhythm, and frequently a hard-to-describe "emptiness" of claims.
(LLMs can write in many styles, but this is the sort of "kid filling out the essay word count" style you get in ChatGPT etc. by default.)
Now, please, divulge your secret--your verbal nectar, if you wish--so that I too can flower in your tongue!
For sure he describes an education in English that seems misguided and showy. And I get the context - if you don't show off in your English, you'll never aspire to the status of an Englishman. But doggedly sticking to anyone's "rules of good writing" never results in good writing. And I don't think that's what the author is doing, if only because he is writing about the limitations of what he was taught!
So idk maybe he does write like ChatGPT in other contexts? But not on this evidence.
I have seen people use "you're using AI" as a lazy dismissal of someone else's writing, for whatever reasons. That usually tells you more about the person saying it than the writing though.
That's just sad. I really feel for this author.
LLMs - like all tools - reduce redundant & repetitive work. In the case of LLMs it’s now easy to generate cookie cutter prose. Which raises the bar for truly saying something original. To say something original now, you must also put in the work to say it in an original way. In particular by cutting words and rephrasing even more aggressively, which saves your reader time and can take their thinking in new directions.
Change is a constant, and good changes tend to gain mass adoption. Our ancestors survived because they adapted.
The exact same problem exists with writing. In fact, this problem seems to exist across all fields: science, for example, is filled with people who have never done a groundbreaking study, presented a new idea, or solved an unsolved problem. These people and their jobs are so common that the education system orients itself to teach to them rather than anyone else. In the same way, an education in literature focused on the more likely traits you’ll need to get a job: hitting deadlines, following the expected story structure, etc etc.
Having confined ourselves to a tiny little box, can we really be surprised that we’re so easy to imitate?
Because while people OBVIOUSLY use dashes in writing, humans usually fell back on the (technically incorrect) hyphen, aka the "minus symbol", because that's what's available on keyboards and basically no one will care.
Seems like, in the biggest game of telephone called the internet, this has devolved into "using any form of dash = AI".
Great.
- Barely literate native English speakers not comprehending even minimally sophisticated grammatical constructs.
- Windows-centric people not understanding that you can trivially type em-dash (well, en-dash, but people don’t understand the difference either) on Mac by typing - twice.
Wow, you really do under/over estimate some of us :)
I also love and use em-dashes regularly. ChatGPT writes like me.
Beyond these surface level tells though, anyone who's read a lot of both AI-unassisted human writing as well as AI output should be able to pick up on the large amount of subtler cues that are present partly because they're harder to describe (so it's harder to RLHF LLMs in the human direction).
But even today when it's not too hard to sniff out AI writing, it's quite scary to me how bad many (most?) people's chatbot detection senses are, as indicated by this article. Thinking that human writing is LLM is a false positive which is bad but not catastrophic, but the opposite seems much worse. The long term social impact, being "post-truth", seems poised to be what people have been raving / warning about for years w.r.t other tech like the internet.
Today feels like the equivalent of WW1 for information warfare, society has been caught with its pants down by the speed of innovation.
Or rather by the slowness of regulation and enforcement in the face of blatant copyright violation.
We've seen this before, for example with YouTube, which became the go-to place for videos by allowing copyrighted material to be uploaded and hosted en masse, and then a company that was already a search engine monopoly was somehow allowed to acquire YouTube, thereby extending and reinforcing Google's monopolization of the web.
Just recently I was amazed with how good text produced by Gemini 3 Pro in Thinking mode is. It feels like a big improvement, again.
But we also have to be honest and accept that nowadays using a certain kind of vocabulary or paragraph structure will make people think that the text was written by AI.
> Your kernel is actually being very polite here. It sees the USB reader, shakes its hand, reads its name tag… and then nothing further happens. That tells us something important. Let’s walk this like a methodical gremlin.
It's so sickly sweet. I hate it.
Some other quotes:
> Let’s sketch a plan that treats your precious network bandwidth like a fragile desert flower and leans on ZFS to become your staging area.
> But before that, a quick philosophical aside: ZFS is a magnificent beast, but it is also picky.
> Ending thought: the database itself is probably tiny compared to your ebooks, and yet the logging machinery went full dragon-hoard. Once you tame binlogs, Booklore should stop trying to cosplay as a backup solution.
> Nice, progress! Login working is half the battle; now we just have to convince the CSS goblins to show up.
> Hyprland on Manjaro is a bit like running a spaceship engine in a treehouse: entirely possible, but the defaults are not tailored for you, so you have to wire a few things yourself.
> The universe has gifted you one of those delightfully cryptic systemd messages: “Failed to enable… already exists.” Despite the ominous tone, this is usually systemd’s way of saying: “Friend, the thing you’re trying to enable is already enabled.”
You can check both in ChatGPT settings.
I just checked settings, apparently I had it set to "nerdy," that might be why. I've just changed it to "efficient," hopefully that'll help.
e.g. > [...] and there is - in my observational opinion - a rather dark and insidious slant to it
Let's leave it at "insidious" and "in my opinion". Or drop "in my opinion" entirely, since it goes without saying.
Just take one dip and end it.
It feels very natural to me. But if everyone and their mother considers it a "giveaway", I'd be a fool not to consider it. *sigh*
Besides, of course what people write will sound like LLMs, since LLMs are trained on what we've been writing on the internet. For those of us who've been lucky enough to write a lot and are more represented in the dataset, LLM writing will be closer to how we already wrote, but then of course we get the blame for sounding like LLMs, because apparently people don't understand that LLMs were trained on texts written by humans.
And guess what: when you revise something to be more structured and you do it in one sitting, your writing style naturally gravitates towards the stuff LLMs tend to churn out, even if with fewer bullet points and em dashes (which, incidentally, iOS/macOS adds for me automatically even though I am a double-dash person).
Unfortunately I think posts like this only seem to detract from valid criticisms. There is an actual ongoing epidemic of AI-generated content on the internet, and it is perfectly valid for people to be upset about this. I don't use the internet to be fed an endless stream of zero-effort slop that will make me feel good. I want real content produced by real people; yet posts like OP only serve to muddy the waters when it comes to these critiques. They latch onto opinions of random internet bottom-feeders (a dash now indicates ChatGPT? Seriously?), and try to minimise the broader skepticism against AI content.
I wonder whether people like the author will regret their stance once a sufficient number of people are indoctrinated and their content becomes irrelevant. Why would anyone read anything you have to say if the magic writing machine can keep shitting out content tailored to them 24/7?
> TECHNICAL DIFFICULTIES PLEASE STAND BY
This actually made me pee myself out loud!
Seeing a project basically wrapping 100 lines of code with a novel-length README, à la "(emoticon) how does it compare to.. (emoticon)" blah blah, really puts me off.
In comparison, I can sort of confidently ask GPT-5.1/2 to "revise this but be terse and concise about it" and arrive at something that is more structured than what I input but preserves most of my writing style and doesn't bore the reader.
AI companies and some of their product users relentlessly exploit the communication systems we've painstakingly built up since 1993. We (both readers and writers) shouldn't be required to individually adapt to this exploitation. We should simply stop it.
And yes, I believe that the notion this exploitation is unstoppable and inevitable is just crude propaganda. This isn't all that different from the emergence of email spam. One way or the other this will eventually be resolved. What I don't know is whether this will be resolved in a way that actually benefits our society as a whole.
It would be ironic and terrific if AI causes ordinary Americans to devote more time to evaluating their sources.
All the toil of word-smithing to receive such an ugly reward, convincing new readers that you are lazy. What a world we live in.
I think the only solution to this is, people should simply not question AI usage. Pretence is everywhere. Face makeup, dress, the way you speak, your forced smile...
> You spend a lifetime mastering a language, adhering to its formal rules with greater diligence than most native speakers, and for this, a machine built an ocean away calls you a fake.
This is:
> humanity is now defined by the presence of casual errors, American-centric colloquialisms, and a certain informal, conversational rhythm
And once you start noticing the 'threes', it's fun also.
Humanity has always been about errors.
Kenya writes the way the British taught before they left, and the British themselves didn't necessarily speak or write that way.
Perplexity gauges how predictable a text is. If I start a sentence, "The cat sat on the...", your brain, and the AI, will predict the word "floor."
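For anyone who wants the idea made concrete, here is a minimal sketch (all names mine) of how perplexity is computed from the probabilities a model assigns to the tokens that actually appeared: the exponential of the average negative log-probability. Predictable text gets probabilities near 1 and thus low perplexity; surprising text gets the opposite.

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the mean negative log-probability
    assigned to each token that actually occurred."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# The model found every token highly predictable -> low perplexity.
predictable = perplexity([0.9, 0.8, 0.95, 0.9])
# The model was constantly surprised -> high perplexity.
surprising = perplexity([0.1, 0.05, 0.2, 0.1])
assert predictable < surprising
```

AI detectors lean on exactly this contrast: LLM output tends to score low perplexity under another LLM, while human prose is spikier.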
No. No no no. The next word is "mat"!
How do you like that, Mr. Rat
Thought the Cat.
I don't know the author of this article, so I don't know whether I should feel good or bad about this. LLMs produce better writing than most people can, so when someone writes this eloquently, most people will assume it was produced by an LLM. The ride in the closed horse carriage was so comfortable it felt like being in a car, so people assumed it was a car. Is that good? Is that bad?
Also note that LLMs are now much more than just "one ML model to predict the next character" - LLMs are now large systems with many iterations, many calls to other systems, databases, etc.
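A toy sketch of that "model plus orchestration" shape, with every name hypothetical and the model call stubbed out, might look like this: a loop that alternates model calls with tool calls until the model produces a final answer.

```python
# Hypothetical sketch: an LLM "system" as a loop interleaving model
# calls with tool calls, rather than a single next-token predictor.
def fake_model(prompt):
    # Stub standing in for a real model API call.
    if "TOOL_RESULT" in prompt:
        return {"type": "answer", "text": "2 + 2 = 4"}
    return {"type": "tool_call", "tool": "calculator", "args": "2+2"}

def run_tool(name, args):
    if name == "calculator":
        return str(eval(args))  # toy calculator, fine for a sketch
    raise ValueError(f"unknown tool: {name}")

def agent(question, max_steps=5):
    prompt = question
    for _ in range(max_steps):
        reply = fake_model(prompt)
        if reply["type"] == "answer":
            return reply["text"]
        result = run_tool(reply["tool"], reply["args"])
        prompt += f"\nTOOL_RESULT: {result}"  # feed result back, iterate
    return "gave up"

print(agent("What is 2+2?"))  # → 2 + 2 = 4
```

Real systems add retrieval, databases, and multiple model calls, but the control flow is this same loop.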
I really don’t think that is what most normal people assume… And while LLMs can definitely produce more grammatically accurate prose with probably a wider vocabulary than the average person, that doesn’t necessarily mean it’s good writing…
I mean look at two of us - I have typos, I use half broken english, I'm not good in doing noun articles, my vocabulary is limited, I don't connect sentences well, you end sentences with "..." and then you start sentence with "And", etc. I very much believe you are a real person.
I'm not Kenyan, but I was raised in a Canadian family of academics, where mastering thoughtful – but slightly archaic – writing was expected of me. I grew up surrounded by books that would now be training material, and whose prose would likely now be flagged as ChatGPT.
Just another reason to hate all this shit.
Interesting, because it failed for me too, just because I use Firefox. Were you told about the article, or did it actually work with your screen-reader software?
That would probably mess up any screen reader, but it also didn't work on a regular Firefox :)
No, don't think so. To compensate, I probably missed the article about the obfuscation of kindle ebooks...
Let's say you happen to be lucky and haven't accused someone unfairly: they really are using ChatGPT to write what they said. Who cares?! What are you doing by "calling them out"? Winning internet points? Feeling superior? Fixing the world?
People who want to read thoughts of other people and not meaningless slop.
OK but come ON, that has to have been deliberate!
In addition to the things chatbots have made clichés, the author actually has some "tells" which identify him as human more strongly. Content is one thing. But he also has things (such as small explanations and asides in parentheses, like this) which I don't think I've EVER seen an instruction-tuned chatbot do. I know I do it myself, but I'm aware it's a stylistic wart.
The models mostly say "mat".
In that regard, I have an anecdote, not from me but from a student of mine.
One of the hats I wear is that of a seminary professor. I had a student who is now a young pastor, a very bright dude who is well read and is an articulate writer.
"It is a truth universally acknowledged" (with apologies to Jane Austen) that theological polemics can sometimes be ugly. Well, I don't have time for that, but my student had the impetuosity (and naiveté) of youth, and he stepped into several of them over the years. He made Facebook posts which were authentic essays, well argued, with balanced prose which got better as the years passed by, treating opponents graciously while firmly standing his own ground. He did so while he was a seminary student, and also after graduation. He would argue a point very well.
Fast forward to 2025. The guy still has time for some Internet theological flamewars. In the latest one, he made (as usual) a well-argued, long-form Facebook post, defending his viewpoint on some theological issue against people who hold opposite beliefs on that particular question. One of those opponents, a particularly nasty fellow, retorted with something like "you are cheating, you're just pasting some ChatGPT answer!", and pasted a screenshot of some AI detection tool that said my student's writing was something like "70% AI Positive". Some other people pointed out that the opponent's writing also seemed like AI, and this opponent admitted that he used AI to "enrich" some of his writing.
And this is infuriating. If that particular opponent had bothered to check my student's profile, he would have seen the same kind of "AI writing" going back to at least 2018, when ChatGPT and its kind were just a speck in Sam Altman's eye. That's just the way my student writes, and he writes that way because he actually reads books; he's a bona fide theology nerd. Any resemblance of his writing to LLM output is coincidental.
In my own case this resonated because, as I said, I also tend to write in a way that resembles LLM output, with certain ways of structuring paragraphs, liberal use of ordered and unordered lists, and so on. Again, this is infuriating. First, because people assume one is unable to write at a certain level without cheating with AI; and second, because now everybody and their cousin can mimic something that took many of us years to master, and they believe they no longer need to do the hard work of learning to express themselves in an even remotely articulate way. Oh well, welcome to this brave new world...
Basically, for two reasons:
1) A giant portion of all internet text was written by those same folks.

2) Those folks are exactly the people anyone would hire to RLHF the models into a safe, commercially desirable output style.
I am pretty convinced the models could be more fluent, spontaneous, and original, but that could jeopardize their adoption in the corporate world, so I think the labs intentionally fine-tuned this style to death.
Wanna submit evidence in a criminal case? Better be ready to prove it wasn't made with AI.
AI is going to fuck everything up for absolutely no reason other than profit and greed and I can't fucking wait
I regularly find myself avoiding the use of the em-dash now even though it is exactly what I should be writing there, for fear of people thinking I used ChatGPT.
I wish it wasn't this way. Alas.
Thankfully, no one I report to internally wants me to simplify my English to prevent LLM accusations. The work I do requires deliberate use of language.
The other day I saw and argued with this accusation by a HN commenter against a professional writer, based on the most tenuous shred of evidence: https://news.ycombinator.com/item?id=46255049
chatgpt revolutionized my work because it makes creating those bland texts so much easier and faster. it made my job more interesting because i don't have to care about writing as much as before.
to those who complain about ai slop, i have nothing to say. english was slop before, even before ai, and not because of some conspiracy, but because the gatekeepers of journals and scientific production already wanted to be fed slop.
for sure society will create other, totally idiosyncratic ways to generate distinction and an us vs. them. that's natural. but, for now, let's enjoy this interregnum...
What LLMs also do though, is use em-dashes like this (imagine that "--" is an em-dash here): "So, when you read my work--when you see our work--what are you really seeing?"
You see? LLMs often use em-dashes without spaces before and after, as a period replacement. Now, that is probably something only an Oxford professor would write; I've never seen a human write text like that. So those specific em-dashes are a sure sign of generated slop.
>You see? LLMs often use em-dashes without spaces before and after, as a period replacement.
It would not make any sense at all to use periods in the places where those em-dashes are supposedly "replacing" periods in the example.
(Not that I used n- or m-dash previously, I used commas, like this!)
But some people learn n- and m-dash, it turns out. Who knew!
Evidently, you've never read text from anyone whose job requires writing, publishing, and/or otherwise communicating under rules established in (e.g.) the Chicago Manual of Style.