A Word, Please: Coffee-shop prompt stirs ChatGPT to brew up bland copy
67 points
4 months ago
| 17 comments
| latimes.com
haburka
4 months ago
[-]
I appreciate this critique of ChatGPT’s writing style, since I have often been bothered by it but have struggled to define what was wrong. The main feeling I have is that the words are there to make a sentence and the sentences are there to make a paragraph, whereas any decent human writer uses each word to communicate and entertain. The AI does not have nearly enough nuance to produce high-quality writing.

Eventually some AI models may be able to do this, but they won't just be LLMs. Some kind of intelligence needs to exist that attempts to understand underlying fundamentals rather than just finding the most likely next word.

reply
andy_xor_andrew
4 months ago
[-]
The style of ChatGPT always reminds me of bad fanfiction, which I guess makes sense: there is a lot of bad fanfiction floating around on the web.

The hallmarks of this style are easy to spot. Lots of vacuous sentences, which (usually) make grammatical sense but no logical sense. It's very "first-draft-esque". Lots of sentences which, if read a second time, signify nothing, or otherwise just don't hold up to scrutiny.

reply
cratermoon
4 months ago
[-]
The term of art in the writing/editing business is "throat-clearing". A lot of words that don't say much before the action really starts.
reply
HeatrayEnjoyer
4 months ago
[-]
Everyone misses the fact that ChatGPT's style is intentional, to make it obvious that ChatGPT wrote it.

Before RLHF, these models don't sound like that at all.

reply
OrigamiPastrami
4 months ago
[-]
Is there an LLM I can interact with right now that doesn't sound like a chatbot? I'm very interested in a chatbot that sounds like a person - hell, I'd pay money for it.
reply
maeil
4 months ago
[-]
Out of the big 4, Gemini sounds the least like a chatbot and GPT the most. Claude and Llama are somewhere in between.
reply
stavros
4 months ago
[-]
There are thousands of LLMs, tuned for different purposes. You could also probably prompt GPT in the right way and achieve that.
reply
OrigamiPastrami
4 months ago
[-]
No, I cannot prompt ChatGPT and get a response that sounds human. You may think you can, but in my experience ChatGPT is fucking awful, and I was asking if there is an LLM that doesn't sound obnoxious.

Obviously this is all subjective, but OP implied it would be easy to get an LLM to not sound like a stochastic parrot.

reply
stavros
4 months ago
[-]
Try Llama 7B, or one of the Mistrals.
reply
OrigamiPastrami
4 months ago
[-]
Thank you for the reply - I'll give them a shot.
reply
stavros
4 months ago
[-]
You're welcome! Generally, I've found that the smaller models sound more human, or at least more creative.
reply
bartvk
4 months ago
[-]
Why would you struggle to define what's wrong? It's overly verbose and weirdly flowery.

I wish LLMs were better. But currently, when I read that drivel, I feel repulsed.

reply
grues-dinner
4 months ago
[-]
I think they write like this partly because most of a generation or more of bad "copywriters" have been taught by teachers, by other bad or amateurish writing, by SEO BS artists, and even by their own clients that this is good writing. Billions of words of this dreck fill the Internet. Everything is an amorphous "experience", is "indulgent" and "unforgettable" and comes in "bespoke packages". Everyone is generically "committed" and "passionate" about their "dream". Read the blurb on any packaging and it's the same empty, flowery pap.

It sounds good on a surface level, but it's empty linguistic calories. It's Instagram writing. Lord Dorwin would write like that.

It's also tricky to prompt an LLM as you would an (engaged) human copywriter, because the human has extra cues like "why am I being asked to do this anyway", "who asked me" and "what about this coffeeshop is special but isn't in, or isn't emphasised by, the prompt/original copy", plus whatever extra research they may do. By the time you've prompted it carefully enough, and verified the emphasis is where you want it, you're basically just writing it yourself.

Of course, what the LLM will do here is the same crap job you can get from a guy on Fiverr who bangs it out in a few minutes. Because that's where most copy comes from in the first place, and that's the training data. And the LLM and the Fiverr guy have the same lack of investment in the job. And, as the Internet, full of breathless writing about passionate, committed experience packages, proves, that's good enough for most people.

It's a bit like smartphone cameras wrecking the low-end camera market. If you want to sell standalone cameras now, you have to sell very good cameras.

reply
bqmjjx0kac
4 months ago
[-]
It's just a new flavor of uncanny valley IMO, but this time it's an approximation of communication/language. I am in awe of the technological breakthroughs that brought us this far, but disappointed in how it's being intentionally overhyped to devalue human experts like writers and software engineers.
reply
SatvikBeri
4 months ago
[-]
There's a lot of very verbose and flowery writing that's much better than ChatGPT, e.g. anything by Dickens.
reply
ZiiS
4 months ago
[-]
LLMs target the average; 50% of human writing is above average, not just the one-in-a-generation talents.
reply
bryanrasmussen
4 months ago
[-]
Overly verbose and weirdly flowery is a style, in some places and times even the default style.

However, when I read the overly verbose and weirdly flowery text of the late 19th century, as an example, it feels different from the weirdly flowery press-release style LLMs favor.

reply
grues-dinner
4 months ago
[-]
It's a different thing entirely. The former style, in which, I must confide to you, my dear fellow, I have always found an inordinate amount of pleasure in reading, risible though you may reasonably find it to be so, written as it often is in a complex form that my erstwhile English masters would likely consider gauche and lacking in focus or, worse, perhaps even damned as trending to running-on, does rather transport my mind to the place and time of writing in a way that mere graven images, despite their undeniable detail and modern polychrome technologies, do not.

Perhaps it's a result of writing and reading both being marks of education, so more complex writing was valued over the quick parse by the reader that is highly valued today. Or, maybe, the use of hand- or type-writing led to train-of-thought, subclause-rich writing, because it's hard to recast a sentence without a cursor, and one iteration of fair copying was probably enough for most purposes. Furthermore, much less writing was even done: far fewer people did it, and it was expensive and laborious to write, transmit and reproduce. And what was written was often not written by, or intended for reading by, people without expensive educations.

Most people probably read a lot more text now than most people did back then, and they do it faster and more disposably. Things these days are often written to be quickly scanned, perhaps only by automatic search crawlers, to have keywords recognised and pulled out and the chaff discarded, which works better with short, simple sentences. And prominent keywords are important, leading to self-reinforcement of both a set of shibboleth words that "all good copy" should have and of how and when those special words are to be used.

reply
bartvk
4 months ago
[-]
Exactly. Upon smelling the LLM excretion, I immediately get this feeling that I've been tricked, and I get a bit pissed off.
reply
Zondartul
4 months ago
[-]
My hunch is that since LLMs are trained on a per-word basis (okay, per-token), vacuous verbosity is overrepresented.

If you have one normal sentence and one overly verbose one, the latter will have more tokens and therefore more weight.
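
Very roughly, the intuition in toy form, assuming a standard summed next-token cross-entropy loss (numbers made up, not from any real model):

    import math

    def sequence_loss(token_probs):
        # Next-token objective: sum of -log p(token) over every token in the text.
        return sum(-math.log(p) for p in token_probs)

    # Pretend the model predicts each token equally well (p = 0.7) in both versions.
    concise = [0.7] * 8    # "Try our new espresso."  -> ~8 tokens
    verbose = [0.7] * 40   # the flowery rewrite      -> ~40 tokens

    print(round(sequence_loss(concise), 2))  # ~2.85
    print(round(sequence_loss(verbose), 2))  # ~14.27, five times the training signal

So per document, the padded version simply contributes more loss terms (and more gradient) than the terse one, even though they say the same thing.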

reply
joegibbs
4 months ago
[-]
Now there's a benefit of AI that most people probably haven't considered: give it a few years and I bet you'll start seeing way less vacuous, cliched corporate writing once people have read enough AI output that they'll start identifying it with AI - even if it's written by a human.

Also, it seems like the reason it does this is that you're trying to convert a couple of sentences into a full blog post/press release/article. All it has to go on is the input sentence - unlike an actual writer, who (probably) has all the details about the subject. If you don't have the details and the result must be at least X length, then you're going to get a ton of useless padding around the only details you've given it, because the only other thing it could do is make things up.

reply
1123581321
4 months ago
[-]
It’s a little better than that, as you can give it your website or drop in some other articles for reference and it’ll pull out details and context to use alongside its general knowledge. I’m hoping for the same thing, though. If the prompt is what you really want to say, said plainly, just post that!
reply
cam_l
4 months ago
[-]
>the prompt is what you really want

It crosses my mind, every time I realise I am in the midst of some verbose, contentless LLM drivel, that I would prefer to read the prompt rather than the garbage that gets spat out.

There are a few tools floating around that do prompt extraction. I must admit I haven't gotten around to trying them out, but I have been toying with the idea of making a local browser plugin to strip a page down to its prompts and maybe find out why I was there in the first place.

reply
pishpash
4 months ago
[-]
That's why you use it for summarization or transformation, not expansion.
reply
pen2l
4 months ago
[-]
I have a mathematician friend who studied mathematics at MIT and then worked in a research team at Google for some time. I caught up with him, and casually half-jokingly told him that I tried to understand some of his papers but I got nothing out of them.

And he replied, in earnest, that some of the papers are needlessly and deliberately complicated and obfuscated for a whole host of reasons (political reasons, career reasons etc).

This jargon-heavy, circumlocutory academic speak you often find in papers is a gatekeeping device... to keep out non-native English speakers, or up-and-coming scholars? I don't know, but whatever the case, let's hope the outflow of papers written with LLM assistance takes a bite out of the problem: once the corpus of literature is diluted with all this, corporate-speak and jargon-heavy speak as in-group differentiators will have less incentive to exist.

https://paulgraham.com/simply.html

reply
bee_rider
4 months ago
[-]
Academic language can be a bit jargony and tedious. But also, academics are people too. Is it at all possible that your friend was writing with an assumed audience of other experts and wanted to mitigate the extent to which your inability to get anything from the papers made you feel bad?
reply
nick3443
4 months ago
[-]
Ironically, one of the things LLMs are actually decent at is helping remove jargon and summarize text.
reply
Sateeshm
4 months ago
[-]
It's hit or miss. In my experience, when they summarise they struggle with nuance and lump together things that seem similar in nature but really aren't.
reply
pishpash
4 months ago
[-]
Certainly Gemini for Google Workspace has been useless so far.
reply
mrweasel
4 months ago
[-]
Is ChatGPT really competing with a good professional copywriter, or is it competing with “whatever Carol from the in-house marketing department made up”?

It is my opinion that most companies and government organisations put very little thought into writing and communication. Organisations will have large communications departments and yet repeatedly fail to communicate in clear, easy-to-read language. Often there will be multiple spelling errors, despite spellcheckers being almost as old as computers; people either can't use them or trust them blindly. Sentences make no sense, or, as in the article, they are empty and filled with clichés, and this is from professional communications departments.

Frequently my municipality will send out information that is unnecessary, full of factual errors and poorly worded, and that completely fails to provide enough detail for you to actually act.

Communication is difficult, and yet it is given very little attention in modern organisations. ChatGPT won't work for people who already take writing seriously; for everyone else, it's probably not far off, or at least not significantly worse.

reply
saaaaaam
4 months ago
[-]
I had first-hand experience of this recently. A tenant in a property I rent out was sent a near-incomprehensible email written in “building manager” gobbledegook by the agency that manages the property. “I would be most grateful if you can either facilitate the release of keys in order for the inspection to be undertaken on the date in question or otherwise revert notifying me of alternative arrangements for access”.

This, translated, is “Please can you either leave a set of keys with the front desk on Monday, or let me know when you will be home so that we can knock on the door.”

I think this is the sort of thing that AI actually CAN help with: give it the details of what needs to happen and it can (hilariously) write a more “human-sounding” email than this sort of dreadful “property agent with a stick up his ass” email.

reply
3eb7988a1663
4 months ago
[-]
Government failures in communication are a huge bugbear of mine. I have multiple degrees, and I am routinely stymied by the intent of some government forms or signage. Is anyone involved wondering how the verbiage will be interpreted by an ordinary person? How is this text supposed to be accessible to someone who is an ESL speaker or otherwise has a mediocre level of literacy?
reply
karaterobot
4 months ago
[-]
Yes, but here's the problem: most web writing by humans is tepid and bland as well. The reason for it may be different, but the effect is the same. Human content-fillers get requests to bang out a few content units a day, so they do it. Few if any of these are good. Why would they be? That would be beside the point. They exist to trick search engines, that's it. A user clicks a search result and scans the page long enough to realize they've been duped. I'm all for paying writers, which is why I buy books and subscribe to magazines, Substacks, and Patreons. But I'm not going to exert myself to protect the unhallowed institution of SEO blogspam production.
reply
Ylpertnodi
4 months ago
[-]
>Yes, but here's the problem: most [web] writing by humans is tepid and bland as well. The reason for it may be different, but the effect the same.

I write at a much lower level than what i did do, cus a) my audience can't seem to comprehend even words like 'comprehend ' and 2) it's not worth getting in trouble at work, where d( apparently i work with 'the finest'.

reply
lelandfe
4 months ago
[-]
I respect how much this comment had to lean into the errors to avoid it just looking accidental
reply
internet2000
4 months ago
[-]
The copywriter’s text is 5x better at 50x the cost. I can tell where this is going.
reply
Spivak
4 months ago
[-]
Yep, the AI version is exactly as vacuous as every article in my local newspaper. I appreciate the craft of writing, and I don't doubt humans will do it better forever, but for marketing copy, the bulk of what's being asked for pays no mind to whether anyone would actually want to read the words.

I wish the result were that anything you would ask AI to write like this, people would just replace with nothing; it clearly doesn't matter if it's good or interesting.

reply
layer8
4 months ago
[-]
If good writing becomes scarcer, it will also become more valuable and sought after.
reply
rchaud
4 months ago
[-]
...all other things remaining equal, such as the economics of the industry that actually rewards good writing. Currently it's geared towards quantity over quality and towards the quick, formulaic and repeatable.
reply
santoshalper
4 months ago
[-]
Enshittification, baby. Everything gets quite a lot worse, but much cheaper. Soon, we'll have all the bullshit we could ever want!
reply
sandspar
4 months ago
[-]
These "outsiders critique AI" articles never include their prompt. If you blindly copy and paste a brick of text into ChatGPT and ask it to "write an ad about this" then you can expect junk.
reply
xelamonster
4 months ago
[-]
Don't you think that's exactly what a manager looking to replace human writers with AI is going to do, though? If they don't want to pay a copywriter, they certainly won't be hiring a "prompt engineer". Most GPT-generated articles on the internet look exactly like what they produced here, and that'll be the case as long as getting something halfway decent requires hundreds to thousands of words that boil down to "please make this actually good".
reply
boredemployee
4 months ago
[-]
Man, exactly that. I use GPT a lot and I know what to expect: in most cases it won't solve all your problems; it will probably solve a few, and not in the first iteration.
reply
voiper1
4 months ago
[-]
Sounds just like the kind of long-form articles I kept seeing links to before ChatGPT and decided to stop reading. It's giving back what it's been trained on. :shrug:
reply
kmoser
4 months ago
[-]
For every author like that one who is more capable than ChatGPT, there are many others who are less capable and who will fall back on ChatGPT (and other LLMs) to boost the quality of their output.
reply
Mistletoe
4 months ago
[-]
“With a steadfast commitment to quality and community, Caffe Maximo’s expansion into Redondo Beach symbolizes its dedication to fostering local connections.”

We must do whatever we can to avoid a future where we read AI crap like this forever. It’s already seeping in everywhere. It is like reciting the phone book.

reply
nick3443
4 months ago
[-]
You will have to pay for a premium AI feature in your browser to wade through the sea of shit and keep most of it from reaching your eyes!
reply
CityOfThrowaway
4 months ago
[-]
Most of this critique hinges on the author not being skilled in the practice of using LLMs to produce good writing outputs.

ChatGPT is a bad writer, and is the wrong tool for copy writing. That is a consequence of its post-training, though. I don't expect this to be true forever.

Using Claude or Writer would produce better outputs even in unskilled hands. And even with ChatGPT it's possible to get good outputs if you know what you're doing.

This implies that perhaps there will still be a role for copywriters as skilled users of LLMs. Though I think it's more likely that the service of copywriting will transform into copywriting products that centralize LLM know-how and expose simple self-service tools.

reply
Fricken
4 months ago
[-]
You can ask ChatGPT to rewrite things in the voice of a particular author, or with other style directions. There are no guarantees it will follow your directions well, but you certainly aren't stuck with ChatGPT's default writing style.
reply
CityOfThrowaway
4 months ago
[-]
Yup! That said, it is heavily biased in favor of meaninglessly metaphorical language and comma splices.

I have a feeling that the early style guides were weighted heavily in favor of poetic forms because it was extremely impressive to see AI create poetry.

But now we are used to it and we just want effective, boring writing!

reply
antimemetics
4 months ago
[-]
I just tell it to follow a concise style and the Federal Plain Language Guidelines. It works great mostly, but sometimes it misses some important details.
reply
deadeye
4 months ago
[-]
What was the prompt? That makes a huge difference in the output.

The article doesn't seem to mention how they prompted the LLM. Without knowing that, it's impossible to say how well GPT actually did.

reply
yumraj
4 months ago
[-]
I doubt that the ChatGPT version was iterated upon.

You don’t just give some instructions to an LLM and pick up the first version it generates. You work with the LLM to edit and refine until you get good copy.

Now if you have no idea to begin with, obviously GIGO.

reply
saaaaaam
4 months ago
[-]
This is the thing that a lot of people who dabble with LLMs don’t seem to understand. And obviously it makes great clickbait to rail against ChatGPT.
reply
iamflimflam1
4 months ago
[-]
I publish a regular newsletter on maker news - it's a collection of YouTube videos from our group of makers and interesting links that I've collected over the month (thank you Hacker News for being a great source!).

I've experimented with getting ChatGPT to produce the titles and teasers that I write for each video and link.

No matter how I prompt it, and no matter how many examples of the newsletter it is given, it refuses to write in my style. It always falls back into the ChatGPT style - there are only so many times you want to "dive" into something or "delve" into a topic.

It is quite useful for generating an initial summary, but needs a good editor to turn it into useful copy.

I will occasionally, when really pressed for time, use it for the opening intro, but even then it just sounds like drivel.

reply
saaaaaam
4 months ago
[-]
What’s your prompt though?
reply
stavros
4 months ago
[-]
As I tend to say, LLMs are a quick way to get you to average. For some people, that's an improvement.
reply
oarsinsync
4 months ago
[-]
Up to 50% of people, maybe
reply
lxgr
4 months ago
[-]
Only with some extra assumptions about the shape of the distribution :)
reply
GaggiX
4 months ago
[-]
For context, the article is from April; as such, the author was using GPT-3.5.
reply
ZiiS
4 months ago
[-]
GPT-4 launched March 14, 2023; an April 2024 article presumably used it?
reply
GaggiX
4 months ago
[-]
GPT-4 has not been available in the free ChatGPT tier, same with GPT-4 Turbo, and I highly doubt that the author used the paid version.
reply
ZiiS
4 months ago
[-]
I guess, to me, the idea that a professional author couldn't get $20/month of value out of ChatGPT is just as silly as expecting one prompt to beat an expert with years of industry experience.
reply
saaaaaam
4 months ago
[-]
I wonder if she’s been replaced by a better prompt by now?
reply
pettycashstash2
4 months ago
[-]
AI is now part of the workflow, and with it comes a new job description: getting rid of its hallucinations. Virtually every industry is affected by this, and we have to adjust how we work. I feel bad for companies that don’t realize it: they will pay when AI hallucinations drive away customers because they were too cheap to pay for copy review, or because they thought AI would replace common sense.
reply
saaaaaam
4 months ago
[-]
The premise of this article is far more depressing than the “depressing” conclusion that the author seems to propose, which is that journalists are going to be out of jobs.

“Journalist who doesn’t understand technology tells ChatGPT ‘write me a 200-word article from this press release’ and rails against bland outputs” is a better headline for her piece.

I partly work in product management at a news company.

I have a “reduce press release to article” prompt that I’ve developed over the past few months. It’s got 5 sections and 63 instructions to handle everything from formatting to tense to attribution to catching hallucinations.

The output of that prompt goes into another that specifically focuses on catching hallucination, paraphrasing and mis-attribution.
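
Roughly, the shape of it is the sketch below. The prompts and the call_llm helper are illustrative stand-ins, not our actual prompts or stack:

    # Two-pass pipeline sketch: draft from the press release, then a dedicated
    # hallucination/attribution check against the source. call_llm is a
    # placeholder for whatever chat-completion client you use.

    DRAFT_PROMPT = """Rewrite the press release below as a short news article.
    Follow these style rules exactly:
    {style_rules}

    Press release:
    {press_release}"""

    CHECK_PROMPT = """Compare the draft against the source press release.
    Flag and fix any claim, quote, or attribution in the draft that is not
    supported by the source, then return the corrected draft.

    Source:
    {press_release}

    Draft:
    {draft}"""

    def press_release_to_article(press_release, style_rules, call_llm):
        draft = call_llm(DRAFT_PROMPT.format(style_rules=style_rules,
                                             press_release=press_release))
        # Second pass: catch hallucination, paraphrasing drift and mis-attribution.
        return call_llm(CHECK_PROMPT.format(press_release=press_release,
                                            draft=draft))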

That might sound like overkill, but it works brilliantly and turns out copy that is often better than what the human writers doing the same thing produce. By taking this off their plate, they can focus on high-value and interesting writing, not simply regurgitating a press release.

Though, as someone else has pointed out, many writers are bland and good for little else than regurgitating press releases. They should find alternative employment fast, because they will be the first writers to feel the true impact of LLMs.

I didn’t really believe in “prompt engineering” before I worked on this, and regarded it as techbro snake oil akin to “SEO hacking”, but going through the process of building out this mini-product (which also scores incoming press releases for relevancy against topic, companies mentioned and people mentioned) has shown me how incredibly important a well-thought-out and well-structured prompt is. If you’re just asking your LLM to rubber-duck with you, it’s not important, but if you want high-quality, replicable outputs, it’s fundamental.

Yes, if you put a press release into ChatGPT and don’t think about the outputs, you’re going to get slop out the other side.

All the more depressing because she’s promoting her book about syntax - and syntax is a core competency for prompt engineering.

Online news is about to be left behind. Yet again.

reply
grizzlychair
4 months ago
[-]
Model fine-tuning solves this. When (not if) the LA Times gets tired of paying writers, they will pay someone to fine-tune an LLM on their corpus.
reply
sterlind
4 months ago
[-]
I doubt it. LLMs seem to do a very superficial job generating summaries and reviews, likely because they don't know what to focus on (what information is relevant) and because they don't have the planning ability to structure a narrative. Instead, they tend to generate purple prose with buzzwords and clichés, fill in facts taken from the source material, and have problems connecting clauses.

It's a bit like image generation. It looks impressive at first glance, but generic, and when you focus you realize the image doesn't make sense, the details are wrong, the layout is confused.

You can't generate working schematics or infographics with diffusion models - all you can make is filler content. Fine-tuning can give you different art styles, but it can't give you coherence. Fine-tuning can change an LLM's prose style, but it will still generate superficial, vacuous filler prose.

Why would I pay the LA Times for drivel?

reply
santoshalper
4 months ago
[-]
That only works because they have a corpus of high-quality, human-produced source material. What happens when they overfarm the crops and ruin the soil? Perhaps they don’t care, but should we?
reply