All my clients wanted a carousel, now it's an AI chatbot
109 points
4 hours ago
| 16 comments
| adele.pages.casa
| HN
mananaysiempre
3 hours ago
[-]
> Then the trend quietly died, as trends do. Not because anyone decided carousels were bad. Just because something newer came along to copy.

> [...]

> I've started asking clients a simple question when they bring it up. Not to be difficult, just to understand.

> [...]

> It's not about utility. It's not even really about the chatbot. It's about visibility, the fear of looking behind.

> [...]

> No pop-ups. No blinking corners. Just content, clear and immediate.

It’s been long enough that this might even have plausibly come from a human with LLM writing overrepresented in their brain rather than an LLM. But either way there’s this record-scratch feeling that I experience on each one of these, and (fittingly) it just completely knocks me out of the groove, requiring deliberate effort to resume reading.

And, I mean, none of these is even bad in isolation, but it sure feels like we’re due either a backlash where these patterns become underused even when appropriate, or them becoming so common they lose their power (is syntax subject to semantic bleaching?). Or perhaps both. Sociolinguists are going to have a blast.

reply
fallpeak
2 hours ago
[-]
Have courage and trust your own instincts. Unless one is extremely disagreeable it's very tempting to hedge and avoid outright saying "this is AI" just in case you're wrong, but if you're literate and regularly exposed to AI outputs your instincts are likely quite accurate.

In this particular case the linked article is definitely AI generated.

reply
eterm
2 hours ago
[-]
Indeed, consider the two posts linked below, also from this blog. They look the same and maintain the same impersonal writing style. There's no humanity to it at all.

They maintain such a consistent paragraph length that the author is either a professional copyeditor or, as is clearly the case, an LLM.

Humans deviate a lot more than this: they use run-on sentences or lose the thread in their writing.

This blog, however, reads like every other post on LinkedIn. Semi-professional tone, with a strong "You, Me" hook to most posts.

I encourage everyone to generate an LLM blog (don't post the articles anywhere) just to get a feel for how these things write.

Because this is unmistakably LLM. I'd even go so far as to identify the model of these particular posts as ChatGPT.

Yet when we point this out, we're told it is "unmistakably human" and that we're rude for pointing it out.

https://adele.pages.casa/md/blog/the-joy-of-a-simple-life-wi...

https://adele.pages.casa/md/blog/finding_flow_in_code.md

reply
joshka
1 hour ago
[-]
Is this comment LLM generated?
reply
fallpeak
1 hour ago
[-]
What does that have to do with anything? These days any piece of text may or may not be AI generated (my money would be heavily on "no" for the post you asked about), but either way it isn't blatant slop, so we can't tell.

It feels like you're trying for a lazy gotcha, but the actual point here is something like "AI models often generate writing with specific noticeable characteristics that make it obviously AI output, and TFA is an instance of such writing, and this should be called out when possible".

reply
mananaysiempre
1 hour ago
[-]
I started off hedging but by the end of the comment came to think that AI use—or lack thereof—was actually beside the point. I have feelings with regard to the situation, where “the situation” includes some largely irrelevant-to-writing things like the mainframization, and the “feelings” are not nearly coherent enough to graduate to thoughts. Thus (unlike some others) I don’t think that calling out writers or warning readers about AI is all that useful (or for that matter courageous). With respect to writers who use AI due to a lack of confidence, it’s probably even harmful. (Saying that as a person who manages to absolutely suck in embarrassing ways in multiple foreign languages. And also in English, but less obviously. And likely in my native language too, due to lack of use.) Meanwhile, TFA makes a decent point, and I am in no position to criticize people for being wordy.

The thing is, by now it doesn’t actually matter if AI or not AI or partly AI or whatever, because the record scratch is still there and still breaks my immersion. I could be oversensitive (I definitely am to some other English-language things, and also feel that others are to yet other things like em dashes), but it feels like there’s a new language/social-signalling thing now, and you may have to avoid it even if you’re not an LLM.

reply
franga2000
3 hours ago
[-]
LLMs don't "own" this writing style. By definition they can't - they were trained on human writing after all! People wrote like this before and that's fine. You might not like the style, but saying it's because LLM writing has infested their brain is wrong, dismissive and dehumanising.
reply
dxdm
2 hours ago
[-]
Any style can cross the border into bad and get in the way of itself when it's turned up to 11, no matter who wrote it.

There've been stylistic fads before LLMs were a thing, with results just as chalkboard-screech-inducing as the current one. That this one is just a button-push away does make it worse, though, because it proliferates so greedily.

Bad writing is bad writing, and writing like an LLM is writing like an LLM. We should be able to call this out. In fact, calling out the human responsibility in it is the very opposite of dehumanizing to me.

reply
franga2000
2 hours ago
[-]
Yes, definitely, but the parent post was quite explicitly saying it was either LLM generated or the person's style was influenced by consuming LLM content.

Sure, call the style bad or even similar to LLMs, but there's no reason to believe the style came from LLMs. It existed before and people who used it before still exist and still use it now.

Hell, this person seems to be a web(site) developer, and that's a very marketing-speak-heavy field. It's far more likely that's where they "caught" this style. It happened to me too, back when I was still in it.

reply
dxdm
1 hour ago
[-]
I think the original comment is much more open-minded towards the author of the TFA than you are to the commenter.

> explicitly saying it was either LLM generated or the person's style was influenced by consuming LLM content

We might disagree here, but if we're strict, they did not say "either/or", especially not explicitly. They raised two possibilities but didn't exclude others.

> there's no reason to believe the style came from LLMs

They say "might" and "plausibly". I think there's no belief there until you assume it.

And even if: it's not unlikely that a contemporary author's mind is influenced by the prevalent LLM style. We are influenced by what we read. This has been happening to everyone for ages, without anyone questioning the agency of writers. There's nothing wrong with suggesting that could be the case here. It's entirely human.

I know it's easy for one's mind to jump to conclusions, but I am not a fan of taking that as far as accusing someone of "dehumanizing" others. Such an escalation should ideally cause a pause and a think, before pressing submit.

reply
mananaysiempre
1 hour ago
[-]
Nah, the two possibilities were in fact exclusive in my mind (subject of course to the usual likelihood of any one thing I say being completely wrong, but that’s always in the background and not that useful to constantly point out). And it might be fair to say that it is unwise to attempt this kind of amateur psychoanalysis in public. It’s just that I don’t see being influenced by things you read as a big deal, let alone an accusation, let alone a dehumanizing one. See my neighbouring comment[1] for more on the last point.

[1] https://news.ycombinator.com/item?id=48073567

reply
servo_sausage
2 hours ago
[-]
Only to a limited extent; the fine-tuning of these models uses a much smaller, more curated set to shape tone and defaults.

The whole corpus is in there, but the standard style is what gets tuned for.

reply
piva00
2 hours ago
[-]
I wonder how much marketing copy has poisoned the "default" writing style of LLMs; it surely has those undertones of pitching a sale, in an uncanny-valley way.
reply
watwut
2 hours ago
[-]
So I will say that the things I read were not written in this style.

And the people I read were better at not inserting unnecessary, random, completely made-up facts or illogical implications.

reply
mananaysiempre
2 hours ago
[-]
LLMs don’t own these expressions in the same sense that McDonald’s doesn’t own salt: they are undoubtedly making use of a strong reaction that humans have had—have been having—long before; but they did develop a way to mash that button on an industrial scale like few before them. (With of course a great deal of help from humans, be it via customer surveys or RLHF; or you could call it help from Moloch[1] in that the humans unwittingly or negligently assembled themselves into a runaway optimizer.) So I think it’s fair to say that LLMs do own this style, as in the balance of ingredients, even if they do not own the ingredients themselves. And anyway nothing in the social perception of language cares about fairness: low-class English speakers did not invent negative agreement (“double negatives”), yet it will still sound low-class to you and even me (and my native language requires negative agreement).

As for being dehumanizing, perhaps I did commit the sin of psychoanalysis at a distance here, but I’ve felt enough loose wires sticking out of my brain’s own language production apparatus that I don’t think pointing out the mechanistic aspects reduces anyone’s humanity.

For instance, nobody can edit their own writing until they forget what’s in it—that’s why any publishing pipeline needs editors, and preferably two layers of them, because the first one, who edits for style and grammar, consequently becomes incapable of spotting their own mechanical mistakes like typos, transposed or merged words, etc. Ever spotted a bug in a code-review tool that you’ve read and overlooked a dozen times in your editor? Why does a change in font or UI cause a presumably rational human being to become capable of drawing logical inferences they were not before? In either case, there seems to be a conclusion cache of sorts that we can’t flush and can’t disable, requiring these sorts of actually quite expensive hacks. I don’t think this makes us any less human, and it pays to be aware of your own imperfections. (Don’t merge your copy- and line editors into a single position, please?..)

As for syntactic patterns, I’ve quite often thought of a slick way to phrase things and then realized that I’d used it three times in as many sentences. On some occasions I’ve needed to literally grep every linking word in my writing to make sure I haven’t used a single specific one five times in a row. If you pay attention during meetings or presentations, you’ll notice that speakers (including me!) will very often reuse the question’s phrasing word for word regardless of how well it fits, without being aware of it in the slightest. (I’m now wondering if lawyers and witnesses train to avoid this.) Language production is stupidly taxing on the brain (or so I’ve heard), so the brain will absolutely take every possible shortcut whether we want it to or not.

Thus I expect that the priming effect I’m alleging can be very real even before getting into equally real intangibles like “taste”. I don’t think it dehumanizes anyone; you could say it dehumanizes everyone equally instead, but my point of view is that being aware of these mechanical realities of the mind is essential to competent writing (or thinking, or problem solving) in the same way that being aware of mechanical realities of the body is essential to competent dancing (or fighting, or doing sports). A bit of innocence lost is a fair trade for the wisdom gained.

(Not that I claim to be a particularly good writer.)

[1] https://slatestarcodex.com/2014/07/30/meditations-on-moloch/

reply
xtiansimon
49 minutes ago
[-]
> “…there’s this record-scratch feeling…”

The OP is a blog post. You’re talking about blog-post writing. Maybe you just don’t like their style?

It’s also true that LLM second drafts are a thing.

And it’s true both can ‘record scratch’ you right out of attention.

As is the now-common tendency of readers to be impatient and quickly bored.

And this criticism of writing style (for my part, the article is perfectly readable)—what is the aim? A call for writers to perform some kind of disclosure? Because without a goal, it sounds like complaining that you don’t like the soup.

reply
sebzim4500
2 hours ago
[-]
None of that feels like AI smell to me despite the "it's not X it's Y" framing. I can't really explain why though.
reply
delusional
2 hours ago
[-]
None of those 4 look like AI slop to me. They lack the strange non-sequitur nature these contrasting statements generally have when made by AI. The version of the third example I would expect from a clanker would be more like

> It's not about utility. It's not even really about the chatbot. It's about novelty of talking to a machine

Which of course doesn't connect to the rest of the article contents, because the AI doesn't have any intention in its writing.

reply
operatingthetan
3 hours ago
[-]
My partner works at a nonprofit, and they paid some consultant for a chatbot. The next month they were surprised to get a $2000 bill for API use, and at first wondered if the bot was really popular. The analytics revealed that very few conversations were happening.

The consultants apparently had the bot load and fed it an immediate prompt greeting the user, and this was happening on every page load. Bad consultants, bad bot.
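A minimal sketch of the fix described above, assuming `callModelApi` stands in for whatever hosted LLM endpoint the consultants wired up (the real names and APIs aren't in the comment): serve the greeting as a canned string so idle page loads cost nothing, and only spend a paid API call once a visitor actually sends a message.

```javascript
const STATIC_GREETING = "Hi! Ask me anything about our services.";

function createChatSession(callModelApi) {
  let apiCalls = 0;
  return {
    // Served on page load: no model call, no bill.
    greeting() {
      return STATIC_GREETING;
    },
    // Only a real user message spends an API call.
    async send(userMessage) {
      apiCalls += 1;
      return callModelApi(userMessage);
    },
    get apiCalls() {
      return apiCalls;
    },
  };
}

// Ten "page loads" with nobody chatting: still zero API calls.
const session = createChatSession(async (msg) => `echo: ${msg}`);
for (let i = 0; i < 10; i++) session.greeting();
console.log(session.apiCalls); // 0
```

The point is just that the greeting path and the model path are separate; any framework-specific wiring would go on top of this.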

reply
not_that_d
3 hours ago
[-]
The number of consultants who are well known, have a large presence in developer communities, and give a lot of talks, yet have no idea how to approach real-world problems, is impressive.
reply
raverbashing
3 hours ago
[-]
"Bad consultants" you mean, that's the average consultant
reply
enos_feedler
3 hours ago
[-]
“It's about visibility, the fear of looking behind”

This sums up everything driving the tech sector right now. From execs at big tech to nobodies on X.

EDIT: if I think about the nature of it, the visibility fight is about decreasing attention amid increasing channels and noise, so visibility tactics go to the extreme. And the fear of looking behind comes from the previous tech cycles and the thought: what if you had missed those? And maybe those with the most fear are the ones that did.

reply
bambax
2 hours ago
[-]
> right now

It's always been like this. I used to build websites in the 90s and it was exactly like that. It was also horrible. People who had no tech background whatsoever making decisions on which tech to use (PHP vs ASP vs ColdFusion, remember those?); overpaying agencies to make HTML "templates" that had to have round corners everywhere. Etc.

Not everything's great today, but it's a little less bad I think.

reply
enos_feedler
2 hours ago
[-]
I don’t know. I think back to my first dialup connection and getting internet for the first time. In no way do I remember fear being a driver; I remember people being curious. Nobody ran around saying you needed to get on the internet or you’d be left in the dust. I’d be curious if anyone has examples of this, if I’m wrong: YouTube links to old news broadcasts, a magazine print-ad archive, or something.
reply
stuaxo
3 hours ago
[-]
Well, the marketing from the AI companies is working.
reply
enos_feedler
3 hours ago
[-]
That's the clever nature of these companies. They are playing on people's fear to drive adoption. It's a bit sickening to me.
reply
Thanemate
3 hours ago
[-]
"Adopt or be left behind", and the quality of the thing you're adopting relies heavily on how much training it receives from the users who are scared of being left behind.
reply
grebc
3 hours ago
[-]
It’s FOMO and it works every couple of years because the execs who buy in are different to the last lot of execs who got promoted/canned.
reply
h05sz487b
3 hours ago
[-]
The obvious solution is to implement a mock chatbot that answers from a set of pregenerated wrong answers. No one will know the difference.
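Half-joking, but the sketch is trivial (assuming nothing beyond plain JavaScript; the canned answers below are made up for illustration): no model, no API bill, just a deterministic pick from a list of non-answers.

```javascript
const CANNED_ANSWERS = [
  "Great question! Have you checked our FAQ?",
  "I'm not sure, but our team is always innovating.",
  "That depends on your specific use case.",
];

function mockReply(userMessage) {
  // Hash the message so the same question always gets the same wrong
  // answer, which makes the bot look eerily consistent.
  let hash = 0;
  for (const ch of userMessage) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return CANNED_ANSWERS[hash % CANNED_ANSWERS.length];
}
```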
reply
grebc
3 hours ago
[-]
Genius.
reply
halflife
3 hours ago
[-]
These chatbots and the Google login are my most hated features of the current web.

Obviously it's just a script embedded in the page, so it has no actual place in the design. The effect, especially on mobile, is this dance of starting to read a page, having it obscured by annoying popups, and trying (and failing) to close the popup with the hidden 12x12-pixel X button.

Just like the entire ads market, it’s all forgery to drive up clicks so owners can tell their clients that there is interaction.

Don’t get me started on the recent YouTube ads on iPad that place a banner on top of the video, hiding the subtitles, where closing it is behind a menu that requires you to be a brain surgeon to interact with, instead of just clicking the ad itself. I currently have 15 tabs in Safari for ads that I inadvertently clicked.

reply
dbuxton
1 hour ago
[-]
I had the same experience with chatbots, but we shipped a chatbot module a year ago that helps with complex config questions by reading and answering based on a Salesforce Experience site.

I was skeptical but it gets a 68 NPS from users, even if we do get the occasional "why are you investing in AI I hate it" coming through the feedback channel.

As ever, the issue is "what problem are you solving". If it's that you want more people to put their hand up and talk to you/order something, chatbots seem like a bad solution. If it's that you have a ton of complex docs that people have to read in order to implement and use your product, it's not the solution but it's probably part of a solution.

reply
luke5441
1 hour ago
[-]
If the docs are public and indexed by a good search engine, you don't need the chatbot, since users can just use e.g. Google AI.
reply
Ozzie-D
2 hours ago
[-]
Same energy as the carousel era. The client doesn't actually want a chatbot; they want to not feel behind. The question nobody asks is 'what would this chatbot actually do that a good FAQ page can't?', and usually the honest answer is nothing, but it looks modern and that's enough to get through the meeting.
reply
eterm
3 hours ago
[-]
> No pop-ups. No blinking corners. Just content

Your clients seem to have got what they wanted, or at least someone who has learned to write like one.

reply
efilife
3 hours ago
[-]
Come on, this is clearly human-written. People have been writing like this for a damn long time.
reply
eterm
3 hours ago
[-]
It isn't "clearly human-written" at all, the entire blog looks like LLM output, right from the very first post.

I'm not witch-hunting, there are just a lot of witches.

reply
efilife
1 hour ago
[-]
I just went through some of the posts and you are right. It's very suspicious, but I would say it's right at the edge of being plausibly written by a human. If it's an LLM, it's the first one I'm aware of that got me this good. I am usually the first to point out that something reeks of LLM writing here (which I'm kind of ashamed of, considering how much I've been doing it).

Tbh, the whole smolweb concept by this person seemed kinda weird right when I discovered it was a thing. It seems not to really be a thing, but the person is really trying to convince you that it is.

reply
ludicrousdispla
2 hours ago
[-]
>> A way of saying: we're keeping up.

Back in the day, websites could just put up an animated "under construction" gif.

reply
oezi
1 hour ago
[-]
I mostly agree, but some recent experiences with voice chatbots give me pause:

FedEx now has a voice bot when you call, and it is kind of good and fast; I mean faster than navigating their website. It picks up directly after some boilerplate, and it can understand me.

With website chatbots we could see similar leaps if they are done well and have access to the CRM/ERP etc. to actually help you.

reply
try-working
1 hour ago
[-]
I've built chatbot demos for big corps like Walmart and other non-tech brands. What they want is "something that looks AI." The problem with chatbots is they don't work.
reply
wuhhh
3 hours ago
[-]
I stress over this with my own website-for-work. If I make the developer’s version of my site, who am I talking to? Other devs. If I make the version that appeals to agencies and casual users, there’s a constant voice in my head trying to drag me back to something simpler, lighter, judging me for that threejs hero section. As with all things, I guess it’s a matter of finding the right balance. Web development sure is in a very strange place and transitioning hard right now - off topic but I’m seeing more and more people looking for work and fewer and fewer job postings, especially for freelancers like myself. But maybe I’m not advertising AI bot integrations hard enough.
reply
drawfloat
3 hours ago
[-]
Are casual users crying out for AI chatbots? In my experience, the only stakeholder pushing for them is the business itself.
reply
wuhhh
3 hours ago
[-]
By casual users, I mean non technical people who might reasonably be on my website because they’re looking to commission work
reply
pocksuppet
1 hour ago
[-]
Show your clients McMaster-Carr. It's not "simple". It is efficient.
reply
djeastm
1 hour ago
[-]
I love the site, but it's also worth noting that because it is not mobile-friendly it can afford to take full advantage of its efficient catalog nature and not feel the need to make compromises. Sometimes I wish we had said "browsers are for desktops, apps are for tablets/phones" and never tried to combine the two.
reply
Martin_Silenus
1 hour ago
[-]
Girl, give them ELIZA, they won't even notice.
reply
cjs_ac
3 hours ago
[-]
I think an important subtlety here is that clients/‘normies’ look at different websites to us, so the taste in websites that they cultivate is different to ours.
reply
rienbdj
3 hours ago
[-]
Bring back lightbox!
reply