Show HN: Slop or not – can you tell AI writing from human in everyday contexts?
10 points
3 hours ago
| 4 comments
| slop-or-not.space
I’ve been building a crowd-sourced AI detection benchmark. Two responses to the same prompt — one from a real human (pre-2022, so provably predating the prevalence of AI slop on the internet), one generated by AI. You pick the slop. Three wrong and you’re out.

The dataset: 16K human posts from Reddit, Hacker News, and Yelp, each paired with AI generations from 6 models across two providers (Anthropic and OpenAI) at three capability tiers. Same prompt, length-matched, no adversarial coaching — just the model’s natural voice with platform context. Every vote is logged with model, tier, source, response time, and position.
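For concreteness, a single logged vote might look roughly like this. This is an illustrative sketch only; the field names and values are my guesses at a plausible schema, not the actual one used by the site:

```python
# Illustrative shape of one logged vote (field names are hypothetical,
# not the site's actual schema).
vote = {
    "pair_id": "hn-0417",        # which human/AI pair was shown
    "model": "claude-sonnet",    # which model generated the AI response
    "tier": "mid",               # capability tier (one of three)
    "source": "hn",              # origin platform: reddit | hn | yelp
    "position": "left",          # where the AI response was displayed
    "response_time_ms": 5400,    # how long the player took to decide
    "correct": True,             # whether they picked the slop
}

# A released dataset would then be a list of such records, one per vote.
votes = [vote]
print(len(votes), vote["source"])
```

Logging position alongside correctness makes it possible to check for left/right presentation bias when analyzing the results.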

Early findings from testing: Reddit posts are easy to spot (humans are too casual for AI to mimic), HN is significantly harder.

I'll be releasing the full dataset on HuggingFace, and I'll publish a paper if this crowdsourced study gathers enough data.

If you play the HN-only mode, you’re helping calibrate how detectable AI is on here specifically.

Would love feedback on the pairs — are any trivially obvious? Are some genuinely hard?

flossposse
10 minutes ago
[-]
By playing this game I'm helping to train AI how to be less detectable?
reply
eigen-vector
2 minutes ago
[-]
Ha :) I'm not building models, nor am I affiliated with any big labs. The idea is to use this to teach people how to spot the tells of AI writing. Although, like any data that's made open, this could be used to train future models as well, I suppose.
reply
valeena
59 minutes ago
[-]
Was able to get an 8x streak. The question that made me lose it was really hard; I basically took a guess.

Some were hard but spottable after re-reading the answers a good 10 times... ahah.

reply
eigen-vector
52 minutes ago
[-]
Thank you so much for taking a look :) Yeah, you'd be surprised how difficult spotting the nuances can get. And sometimes there isn't any nuance at all: the AI is just as good at writing about the topic while only pretending to know it.
reply
SsgMshdPotatoes
2 hours ago
[-]
Nice idea! Em dashes were giveaways for AI and typos for humans, at least in the pairs I did, so those ones are trivial. You might have to do some filtering for those.

Some were hard though, yeah (at least if you don't look longer than 5-10 seconds). Btw, it seemed more logical to me to just see a green/red card when you click, i.e. right choice or wrong choice. Getting red for the correct answer confused me a bit (but this might just be me).

reply
SsgMshdPotatoes
2 hours ago
[-]
Also, for example, this one has a giveaway for the human case: "There are lots of great people here at /r/personalfinance" (actually, not sure if that is a giveaway, that was my guess, but it depends on how the model was prompted, I guess). And the human ones often seem to have two spaces instead of one, idk why. If you want a serious dataset, maybe you could use this one to find all the flaws and perfect it, and then try to get a real dataset from the next one? People will be more eager to help too if they've seen you designed it all very carefully. (Or you could filter the results from this one to make it a good dataset, if you get lots of responses.)
reply
eigen-vector
2 hours ago
[-]
You'd be surprised at the nuances we tend to miss :)

This time around I didn't prompt the models to be adversarial - I didn't ask them to try and fool the reader. But I gave them contextual info - something to the effect of "you're a user posting on hacker news".

reply
SsgMshdPotatoes
1 hour ago
[-]
True, if you look for all the "obvious patterns" and filter those out of the dataset, probably not much will be left. Maybe it's best then to just publish as complete a dataset as possible: all questions, all user answers, for each user the number of questions they did, time for each question, etc. Then people using that dataset can draw their own conclusions.
reply
eigen-vector
2 hours ago
[-]
Thanks for checking it out! The color signal is useful feedback. Let me think about it and rework!

Yeah, there are some very obvious tells, but the most capable models are very good at writing like a human.

Especially since the human responses to reddit or HN prompts were presumably written after reading the content of the article or post, while the model is simply going off the title.

reply
lucastonelli
2 hours ago
[-]
The coloring is a fair point. I was sometimes confused about whether I got the right or the wrong one XD
reply
lucastonelli
3 hours ago
[-]
Hey, congratulations on the final product. It even feels fun. Some are really hard, but some feel blatantly obvious. I don't know why though. I guess it's just because the way we communicate sometimes feels off compared to AI.
reply
eigen-vector
2 hours ago
[-]
Thanks for checking it out! The obvious ones are (hopefully) the weaker models :) But yes, my experience has been that unless you're consistently engaging with human-written content, the line blurs really easily.
reply