Deslop (tahigichigi.substack.com)
10 points | 1 hour ago | 8 comments
piker
19 minutes ago
Just don't use LLMs to generate text you want other humans to read. Think and then write. If it isn't worth your effort, it certainly isn't worth your audience's.
reply
varjag
12 minutes ago
I would also point to a human-generated (and maintained) list:

https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing

reply
fxwin
18 minutes ago
> The elephant in the room is that we’re all using AI to write but none of us wants to feel like we’re reading AI generated content.

My initial reaction to the first half of this sentence was "Uhh, no?", but then I realized it's on Substack, so probably par for the course for that particular type of writer (writing to post, not writing to be read). I don't even let it write documentation or other technical material anymore, because it kept getting small details wrong or subtly injecting meaning that isn't there.

The main problem for me isn't even the eye-roll-inducing phrases from the article (though they don't help); it's that LLMs tend to subtly but meaningfully alter content, so that the effect of the text ends up (at best slightly) misaligned with the effect I intended. It's sort of an uncanny valley for text.

reply
idop
3 minutes ago
Indeed. I have never used an LLM to write. And coding agents are terrible at writing documentation: it's just bullet points with no context, plus unnecessary icons, and it's impossible to understand. There's no flow to the text, no actual reasoning (only confusing comments about changes made during development that are completely irrelevant to the final work), and yet it's somehow too long.

The elephant in the room is that AI is allowing developers who previously half-assed their work to now quarter-ass it.

reply
stuaxo
18 minutes ago
This article itself feels LLM written.
reply
cadamsdotcom
6 minutes ago
There’s a really cool technique Andrew Ng nicknamed reflection, where you take the AI output and feed it back in, asking the model to look at it - reflect on it - in light of some other information.

Getting the writing from your model, then following up with "here's what you wrote, here are some samples of how I write; can you redo that to match?" makes its writing much less slop-y.
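The reflection step described above could be sketched roughly like this. `call_llm` is a stand-in for whatever chat-completion API you use (it is stubbed here so the example runs offline); the prompt wording is just an illustration, not anything Ng prescribes:

```python
# Sketch of the reflection technique: feed the model's own draft back in,
# together with samples of your writing, and ask it to rewrite to match.
# call_llm is a placeholder stub; a real version would call an actual model.
def call_llm(prompt: str) -> str:
    # Stub: a real implementation would send `prompt` to your model of choice.
    return "REVISED: " + prompt[:40]

def reflect(draft: str, style_samples: list[str]) -> str:
    """One reflection pass over a model-written draft."""
    prompt = (
        "Here's what you wrote:\n" + draft +
        "\n\nHere are some samples of how I write:\n" + "\n---\n".join(style_samples) +
        "\n\nCan you redo the draft to match my style?"
    )
    return call_llm(prompt)

revised = reflect("In today's fast-paced world...", ["Short. Direct. No filler."])
print(revised)
```

In practice you can run this pass more than once, feeding each revision back in until the output stops changing much.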

reply
Leynos
28 minutes ago
Please try to do these, because there's nothing more annoying than some Comic Book Guy wannabe moaning about AI tells while I'm trying to enjoy the discussion.
reply
randomtoast
9 minutes ago
You just need to use this list as a prompt and instruct the LLM to avoid this kind of slop. If you want to be serious about it, you can even use some of these slop detectors and iterate through a loop until the top three detectors rate your text as "very likely human."
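The detect-and-rewrite loop suggested above might look something like this. Both the detector and the rewrite pass are toy stubs (a real setup would call actual slop-detector services and re-prompt the LLM with the phrase list to avoid), so the names here are illustrative only:

```python
# Sketch of the iterate-until-"likely human" loop described above.
# SLOP_PHRASES stands in for the article's list; detector_score and
# rewrite stand in for real detector APIs and an LLM rewrite call.
SLOP_PHRASES = ["elephant in the room", "delve", "in today's fast-paced world"]

def detector_score(text: str) -> float:
    """Toy detector: 1.0 ("very likely human") means no known slop phrases."""
    hits = sum(phrase in text.lower() for phrase in SLOP_PHRASES)
    return 1.0 - hits / len(SLOP_PHRASES)

def rewrite(text: str) -> str:
    """Stub rewrite pass: strips the flagged phrases. A real version would
    re-prompt the LLM with the list of phrases and patterns to avoid."""
    for phrase in SLOP_PHRASES:
        text = text.replace(phrase, "")
    return text

def deslop(text: str, threshold: float = 0.99, max_rounds: int = 5) -> str:
    """Rewrite until the detector clears the threshold or rounds run out."""
    for _ in range(max_rounds):
        if detector_score(text) >= threshold:
            break
        text = rewrite(text)
    return text
```

With several detectors, you would take the minimum (or top-three) score instead of a single one, but the loop shape stays the same.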
reply
Der_Einzige
24 minutes ago
We wrote the paper on deslopping LLMs and their outputs: https://arxiv.org/abs/2510.15061
reply