I wish the team could either restrict new accounts from posting or at least offer default filtering so I only see posts from accounts that meet certain criteria.
I don’t want to see HN become Twitter, which is full of bots and noise; that would be a really sad day.
I do think this is relevant though: "HN can't be immune from macro trends" - https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
More than once I've seen the author, or another significant party to a story, chime in through a fresh green account after being alerted one way or another that the story was posted here. And when they do, it's usually very interesting.
As such, I would find it detrimental if they had to jump through too many hoops: either they wouldn't bother, or it would take so long that the thread dies before they can participate.
(a bit more about this at https://news.ycombinator.com/item?id=47056384, with a reply from the OP)
So maybe some sort of filter like that? Only show it to those kinds of accounts at first?
The downside is that if that group isn't big enough you get a lot of groupthink, but if your sample is wide enough, it can be avoided. To be honest, I don't recall why we stopped doing it.
Rumor has it that there is also some kind of social-network metric for detecting when socially adjacent accounts (or alts) are engaged in astroturfing, the practice where a small cabal tries to pass itself off as a broad grassroots campaign.
Flip that around though and the same metrics might allow new accounts to be meaningfully vouched for by existing ones.
Well, the simplest automated method would be to run the post and comment together through an LLM with a prompt that's roughly:
"Is this person claiming to be the author or co-creator of the work discussed in this submission?"
Only green accounts would be subject to it. I predict you'd have very low false positive and false negative rates.
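A minimal sketch of what that check could look like (the model call itself is stubbed out with a toy keyword heuristic; any chat-completion API would slot in, and all names here are made up):

```python
def build_author_check_prompt(submission_title: str, comment_text: str) -> str:
    """Assemble the yes/no classification prompt described above."""
    return (
        "Is this person claiming to be the author or co-creator of the "
        "work discussed in this submission? Answer YES or NO.\n\n"
        f"Submission: {submission_title}\n"
        f"Comment: {comment_text}"
    )

def looks_like_author_claim(submission_title: str, comment_text: str,
                            ask_llm=None) -> bool:
    """Return True if the (stubbed) model answers YES.

    `ask_llm` is a placeholder for a real chat-completion call; the
    default stub uses a trivial keyword heuristic so the sketch runs.
    """
    prompt = build_author_check_prompt(submission_title, comment_text)
    if ask_llm is None:
        ask_llm = lambda p: "YES" if "I built" in p or "my project" in p else "NO"
    return ask_llm(prompt).strip().upper().startswith("YES")
```

The point is that only flagged green accounts would ever hit this path, so the cost stays small.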
It's of course a terribly slippery slope. My perhaps overly cynical take is that once the infrastructure is in place, some of your bosses would be prone to eventually abusing it.
Personally I'm here for it: Dang, moderator turned whistleblower—on the run from dark VC money—in a race against time to save freedom. Still working on a title for the film.
What I'm seeing is new or sleeper accounts (some idle for over a decade, all with low (<99) karma) getting into comment circles. Over the last couple of weeks I'll see several top comments on articles with back-and-forth between other similar accounts. It's got to the point that I habitually check a user before I even bother reading, and I have never hidden so many comments before getting to something substantive.
Like many here, I don’t wish to limit new users, but from my armchair perspective this does seem like a pattern to be on the lookout for.
I've noticed this kind of behavior on Reddit but never on HN.
Would seem to require some discernment to classify. Not all assistive use is slop.
> Not all assistive use is slop.
That's right, and the key is to discern which posts/projects are interesting.
It obviously says LLM to me at first read-through.
I suspect that:
a) Fewer people are willing to expend the energy to notice LLM usage, given how much of it there is ("we've lost" theory);
b) people are losing the ability to detect LLM submissions ("we're cooked" theory);
or c) people don't care about the use of LLMs ("who cares" theory).
Personally I've been feeling less invested, because it seems as if most users don't care and even the main users of the site don't notice it.
Reddit has subreddits where you need a minimum karma to post; that karma typically comes from upvotes on your comments, which could have been earned in someone else's moderated subreddit.
1. An interesting project gets to the HN front page.
2. The author of the project is not on HN, so they create a green account to interact.
Even if that person had the patience to stick around, by the time they were able to respond it would be too late to be relevant to the (now stale) discussion.
For sure a problem worth considering.
I can't think of anything easy...
Only even remotely sensible thought I have at present:
We add a check box to replies created by new accounts. Maybe created by all accounts?
The prompt reads something to the effect of: I am mentioned in the article. And then they get to say how.
- This is my project
- I am mentioned by name
- Etc.
Whatever it is they wrote, appears somehow, maybe as a required line or something.
Others can see that and either flag the account or vouch.
This at least somewhat distributes the required attention load.
That said, I don't like it. Have nothing better, so here it is!
This is a fundamental part of how HN sees its own functioning; they refer to it as "rate limiting".
This doesn't mean it doesn't have a strong sense of what bad behavior is. It clearly does.
Everything else seems to eventually cause new blood to dry up.
And at about the same frequency you'd be assigned to metamoderate: basically being asked whether a moderator's "vote" was a good one or not (you didn't have to fully agree that you'd do the same, just that it wasn't bad).
Someone who scored low in metamoderation would get fewer or no moderation chances.
It's really not a problem that can be solved easily :(
But if someone wants to plant 20 new accounts, grow them out with karma votes, so that they can game the voting, there are probably other ways to detect that.
We rely on friction for most of our social norms.
With a few layers of defense, you'll weed out almost all of the bad actors. Without strong monetary incentives for spamming, you also avoid most persistent actors.
The main subreddits will basically shadowban you until your account is aged and has more than X karma.
There are a lot of flaws, though. Their appeal system is very broken, for instance.
Reddit hasn't been as overrun by bots yet, for the most part, although how long they can hold out I don't know.
We live with GenAI, and the human to bot ratio is now leaning in a different direction. The old norms are dead, because the old structures that held them up are gone.
This idea in this thread that "more hoops means losing participation" keeps assuming that the community is unaffected by macro trends.
It's weirdly positing that HN posts and users are somehow immune to those trends.
And, as another commenter mentions, if someone shares your work, you should be able to comment on that thread without delay.
(And I stuck around after, a few posts are interesting enough. All the AI stuff isn't, and there is too much of that unfortunately.)
Actually, cross that "will" out: they are already doing this to avoid the green smell. This account replied to me today. 4 months old, but only started posting today. https://news.ycombinator.com/user?id=BelVisgarra
Oh damn, that's the one who posted the AskHN about the verified job portal on the frontpage today. Either this is some shilling still in build-up, or it's an actual human being with severe LLM-slop impersonation derangement syndrome.
It's still a small minority of comments, but it's definitely becoming a problem, and just the chance (even if it's a small one) of talking to a bot rather than a human causes inhibition. Finding out that one has been talking to a bot is like finding out you've been scammed. You invest time and human emotions into something for another human to read, even if it's just a quick HN comment, only to find out that it was all for nothing. It sucks the humanity out of it and thereby out of oneself. You get tricked into spending your valuable, limited human social energy on soulless machines with infinite capacity for generating worthless slop instead of on other humans.
I was able to have discussions where one party has significantly unpopular opinions. Such discussions are unique to HN, please don't kill them.
Example[.]com
But don’t worry, HN has been thoughtful about links from new accounts for months and months (can’t speak for longer, but maybe/probably). Effort could well be duplicative unless I’m unaware of some more granular detail.
Mm, balancing against “long-time lurker, made an account just to post this”…
New account can be invited or vouched for by an old account with good karma. If an account that you vouched for starts spamming and/or slopposting, you lose your vouching for abilities for a period of time or forever.
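A rough sketch of how that vouching scheme could be wired up (the class, karma threshold, and method names are all made up for illustration):

```python
class VouchRegistry:
    """Toy model: established accounts vouch for newcomers; abuse by a
    vouched account suspends the voucher's vouching privilege."""

    MIN_KARMA_TO_VOUCH = 100  # assumed threshold

    def __init__(self):
        self.karma = {}        # account -> karma
        self.vouched_by = {}   # newcomer -> voucher
        self.suspended = set() # vouchers who lost the privilege

    def can_vouch(self, account: str) -> bool:
        return (self.karma.get(account, 0) >= self.MIN_KARMA_TO_VOUCH
                and account not in self.suspended)

    def vouch(self, voucher: str, newcomer: str) -> bool:
        if not self.can_vouch(voucher):
            return False
        self.vouched_by[newcomer] = voucher
        return True

    def report_abuse(self, newcomer: str):
        """Spamming/slopposting by a vouched account penalizes the voucher."""
        voucher = self.vouched_by.get(newcomer)
        if voucher is not None:
            self.suspended.add(voucher)
```

The nice property is that the cost of a bad vouch falls on the person best positioned to have judged it.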
If the discussion is related to a public project, like in the examples in this comment: https://news.ycombinator.com/item?id=47303604
...you can use existing communication channels (website, readme on github) to ask people for an invite to participate in it.
I don't think the solution is changing the dynamic but flagging; this site self-moderates quite well, on top of dang and tomhow's great work.
It’s perhaps unintentional, but your framing makes it seem that this is a baseless whimsy.
At this point, it appears that we will be talking to bots more than humans.
It’s a brave new world, and not adapting to it will see the humans leave.
Because if new account restrictions create enough friction, you lose legitimate users who periodically rotate accounts for privacy reasons.
At some point the annoyance tips toward just lurking, and a forum where only old accounts talk is a stagnant forum given enough time.
Feel free to send me an email (findable via my HN profile) mentioning that you found it via this thread, and I’m happy to extend an invite.
I still remember creating my HN account. It stands out in my memory, because it was the smoothest, simplest, easiest, and quickest account creation of my life.
I had lurked here for around a decade before finally creating an account. Any urge to participate was thwarted by my resistance toward creating accounts (I just hate account creation for some reason). But HN's account creation process was a breath of fresh air. "You mean it can be this easy? Why isn't it this easy everywhere? If I had known how simple it was, I would have created an HN account years earlier, lol."
It was especially stunning to me, because I think the discourse on HN is generally of a higher quality than most other sites (which I wouldn't naturally associate with such an easy account creation process).
It's my only fond memory of account creation (along with maybe when I created an account on America-Online back in the 90s, since that was my first ever account and it was all so novel). Just a few quick seconds, and then I'm already commenting on HN. It was beautiful. I remain delighted.
1. Ideological and/or economically motivated actors will just see it as a cost of doing business.
2. Ordinary sign-up friction is more likely to make HN appear ordinary to anyone who stumbles upon it.
3. Sign-up friction is a moat. The strength of HN is moderation of what gets in.
> Throwaway accounts are ok for sensitive information, but please don't create accounts routinely. HN is a community—users should have an identity that others can relate to.
https://web.archive.org/web/20260228135203/https://www.brain...
https://www.azquotes.com/quote/351103
https://web.archive.org/web/20250713080832/https://www.usmcm...
I'm well aware that the cyberlibertarian ethos endemic here opposes any form of regulation. But when the status quo clearly isn't working, something has to change. Parents have failed to step up and do their jobs. Somebody else has to.
I'm curious to hear what benefits you think can be gained from avoiding this.
Suggestion: Pick a long term account, dump the comments, and see what an llm could figure out about the target
If you intend your accounts to be thrown away, you will likely behave worse.
*I'm using "you" generically, I don't mean you specifically.
People share things that they often wouldn’t. And somehow the culture remains mostly civil. It’s a pretty fantastic forum IMHO.
Changing the rules would surely change the vibe, so to speak.
But will it continue under all the login id surveillance laws coming up?
Even if they did ban me, the account was going to be deleted in a short while regardless. So that fear isn't present for what's essentially a longer-lasting throwaway.
When given a conversation in which Alice and Suzy engage in one-upmanship ("my husband is rich", "my kid is a genius"), and asked what emotions they are feeling and what Suzy could have said instead to improve the conversation, it gave accurate responses (e.g. they're feeling insecure, competitive, envious).
At least new accounts are more obvious here. This pattern has been increasingly used for scams, spam and AI slop on Instagram, X and Facebook for years.
The standard solution is requiring an email to register an account, maybe a Cloudflare captcha, and then using good network logging to group accounts by IP and chainban abusive accounts when they are caught by other mechanisms.
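The grouping-and-chainban step is simple in principle; here's a toy sketch (real deployments key on many more signals than bare IPs, and all names here are illustrative):

```python
from collections import defaultdict

def group_accounts_by_ip(login_events):
    """login_events: iterable of (account, ip) pairs from access logs."""
    by_ip = defaultdict(set)
    for account, ip in login_events:
        by_ip[ip].add(account)
    return by_ip

def chainban(banned_account, login_events):
    """Return every other account sharing at least one IP with the banned one."""
    by_ip = group_accounts_by_ip(login_events)
    linked = set()
    for accounts in by_ip.values():
        if banned_account in accounts:
            linked |= accounts
    return linked - {banned_account}
```

Of course shared NATs, VPNs, and mobile carriers make raw IP matching noisy, which is why it's usually one signal among several rather than an automatic ban.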
My initial thought is to set up a devoted account like "sock_puppet_detector", and using the infrastructure from https://hackersmacker.org/, add any likely sock-puppets as 'foes'.
A lot of users don’t seem to realize that anyone can click on the domain in a "Show HN", and Hacker News will show you all the times that domain has been submitted. So you’ll see four or five different low-karma sock-puppet accounts that have all submitted the same site.
The HN culture has shifted drastically over the past 5 years.
Likewise for guideline-abusers. I don't really know what heuristic you would use to detect rules abuse, but I imagine there are at least some clear violations that could be detected.
Finally, I think I'd make one account for sock-puppets, another for guidelines-abusers, etc, so people can 'subscribe' to whatever degree of 'highlighting' that they want.
created: August 8, 2021
karma: 2686
Account made in 2022 is dodgy. Accounts made 2023-forward that have a hint of LLM speak or are only spreading divisiveness get an immediate red flag from me.
I don't want to make HN harder for legit new users, but I do think a bit of community participation is reasonable before posting a Show HN, so it isn't just a box on some "how to promote your project" checklist.
- A landing page that looks exactly like every single AI generated landing page ever, I don't even need to describe it, you already know what it looks like
- An article or blog post headered by an image with the Gemini logo in the corner
- A Github repository with CLAUDE.md or AGENTS.md and/or 50 large commits made in the span of a day
I'd estimate that more than half of new submissions now fall into one of the above categories.
It's easy for people to game but it's at least one more effort-based hurdle.
Can't allow low-quality posting from new accounts here but thank you for listening to the concerns.
A new human user will spend actual time creating a thoughtful and helpful post, only to be greeted by "sorry, your post has been removed by automod because you don't meet criteria". They get disheartened and walk away forever.
The spammers, on the other hand, know how the rules work and so will just build their bots to work around this (waiting 30days, farming karma).
The net result is that these rules ensure that a much greater proportion of new accounts come from bad actors: who else would jump through hoops just to participate on a web forum?
Not to mention Reddit mass-removed experienced moderators when they protested Reddit removing their access to good third-party tooling.
That's the day the site started its death spiral.
Getting called a fascist and rehashing "no, your libertarian politics are fine, but can you please just start your own sub" in a long, drawn out, hateful back-and-forth gets exhausting after the 200th person who comes to the bicycling subreddit and feels they should be allowed to endorse harming cyclists with their vehicles.
Everyone got mad at spez for having the audacity to fuck with these kids, and there is a point there, but after living with it, I could see myself doing the same damn thing.
And on top of that, some of said "volunteers" are power-hungry, petty, useless fucking morons. Especially the large subreddits tend to be run by people I wouldn't trust to boil some pasta without triggering a fire alert, and yes I know people who manage that.
I still love Reddit for all its flaws though.
IMO New accounts should be restricted from creating new posts, or at least certain kinds of new posts.
Replying shouldn't be restricted. That is how users interact with each other and learn the etiquette of HN.
If "farming karma" is a thing, maybe that forum deserves what is coming. Either the karma mechanic is inappropriate given the demographic, or it is too hard for the users to avoid upvoting bots.
Even for posts that are interesting to me, I get the feeling that it's not worth looking at because it was probably made using LLMs. Nothing against them, but I personally thought of Show HNs as doing something for the love of it, the end result being a bonus.
I'm not opposed to AI automating away stuff no one liked doing, or even more utilitarian things in general, but robots posting on social media and discussion sites seems antithetical. I don't know what the point of talking to a robot would be when I could talk to Claude if I wanted to do that.
I'm not even 100% sure why people are doing Show HN for low-effort stuff that was done in 45 minutes in Claude. I guess it's trying to resume-pad or build a brand or something?
Github star farming, SEO, etc
It does take the handcraft out of it, in that sense an LLM-made tool would be more akin to IKEA stuff compared to a handcrafted work of art (though I struggle to call even hand-made electron crap a work of art, lol).
But yeah I know what you mean, they are usually half-finished solutions.
I'm using a new account and will likely use one forever, as I don't want lots of posts linked together, nor do I care about points or karma or whatever it's called. My first few comments are always shadowbanned. I also see lots of dead posts for new accounts with "showdead" turned on. A lot of them are normal, useful comments, some are inflammatory or just plain stupid. I haven't seen many comments that seem to be AI generated. Maybe they are and I just don't see it, idk.
Anyway, if a comment passes some basic filter (doesn't post shady links or talk about VIAGRA or 11 INCH PENIS or something spammy), I hope they still show up, even as "dead". On this account I copied 1 dead comment to give it more visibility and I've done it before a few times, too. The comment is still dead, btw (id 47262467). And maybe instead of (shadow)banning new users/posts, just make a separate view for old/established account and another one for all posters.
I would also be glad if I could solve some CPU- or RAM-intensive task as PoW. If I really had to, I'd pay with Monero or something similar, as long as it's an anonymous currency with low fees so a payment equivalent to 25 cents wouldn't incur a big fee. I wouldn't pay more per account (especially when I rotate them), as I've been a lurker for years and only recently started posting, anyway (so I don't care that much if I can post).
Finally, thanks for letting us sign up over Tor. :)
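The proof-of-work idea above is essentially hashcash: the server issues a challenge, the client burns CPU finding a nonce whose hash clears a difficulty bar. A toy sketch (difficulty kept tiny so it runs instantly; a real deployment would tune this much higher):

```python
import hashlib

def solve(challenge: bytes, difficulty_bits: int) -> int:
    """Find a nonce such that sha256(challenge + nonce) starts with
    `difficulty_bits` zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(challenge: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Cheap server-side check of a submitted solution."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```

Each extra difficulty bit doubles the expected client-side work while verification stays a single hash, which is the asymmetry that makes it usable as an anti-spam tax.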
EDIT: I meant (but totally forgot) to qualify that my "proposal" would only apply when the LLM-ness is self-obvious—idk, make up a "reasonable person" standard or something. Presumably, the moderators would err on the side of letting things slide. Even so, many comments I've seen are simply impossible for any reasonable person to claim as "human-written"—the default ChatGPT style is simply too distinct.
It pretty much is. It’s not hard and fast (sometimes we’ll warn people or email them to ask if it’s not certain) and it takes time for us to see things and act, especially when people don’t email us when they see these comments.
But as a general rule, accounts that post generated comments get banned.
I'm joking, of course. If your comment was generated by Eliza it would have started with "How do you feel about 'I think your comment...'" :)
(I didn't, and I thank everyone involved for the nostalgic moment. Also, shout out to Dr. Sbaitso!)
We had people defending the fired Ars Technica guy, even though he admitted to using an LLM in some sort of a contrived non-apology along the lines of "I did it because I had a cold".
My main problem with that is that you can just generate an infinite supply of LLM op-eds about LLMs, and is this really what we want to read every day? If I want to know what ChatGPT thinks about the risks or benefits of vibecoding, I'll just ask it.
But in practice, I frequently encounter a comment that either screams generic LLM slop or just gives off a vague, indefinable "vibe" from one or more telltale signs; that's red flag #1. Then I go to the comment history. If it really is a bot/claw/agent or a poster heavily using LLMs, I'll usually find page after page of cookie-cutter repeats of the exact same "LLM smell" (even if the account has been prompted to avoid em-dashes/lists/etc., it still trends toward repeating its own style).
At that point a human moderator would have more than enough evidence to ban an account. It's not like we're talking about a death sentence or something. If no clear pattern of abuse from the long term commenting activity, then give them the benefit of the doubt and move on.
Some is also horribly easy. If the text is full of:
- Overly positive commentary and encouragement
- Constant use of bullet point lists, bolding and emoji
- This quaint forced 'funniness', like a misplaced attempt at being lighthearted
- A lot of blather that just misses the point
- Not concise and to the point, but also not super long
Then that really screams ChatGPT to me.
I think it's because this seems to be the default styling of ChatGPT. When people tailor their prompt to be more specific about style it's a lot harder to detect but if they just dump a few lines of instructions about the content into it, this is what you'll get. So the low-effort slop is still pretty easy to detect IMO.
HN always downvotes attempts at humour, be they chatbot- or brain-generated :)
And it's becoming more and more difficult - not just by AI getting "better" (and training removing many of the telltale signs), but also because regular people "learn" to write like an AI does. We're seeing it with "algospeak" - young terminally online people literally say stuff like "unalived" in the meatspace nowadays.
We're living in a 1984 LARP.
Maybe there can be a dedicated 'flag botspam' button?
Then again it's a nuanced issue. I see AI used in a large percentage of writing now, so would this rule apply to the article as well?
We already have flagging and downvoting?
What does [flagged] mean?
Users flagged the post as breaking the guidelines or otherwise not belonging on HN.
In other words, submissions get flagged that users believe don’t belong on HN. LLM-written submissions can be one such case.
Community moderation won't fix this problem. It can only be mitigated if the site owners invest significant resources in addressing it. And judging by how little YC actually invests in HN, I wouldn't hold my breath. This website will succumb to this problem just like most others.
It is against the rules though
I find the above comment concerning, so I ask: to what degree is the above commenter calibrated to ground truth? How would they know? How would we know?
[1]: https://en.wikipedia.org/wiki/Calibrated_probability_assessm...
It seems to me comments like the above are overconfident in the worst ways.
But yeah, there isn't a way to prove it one way or the other, even when it's "obvious".
I saw in some schools they're using systems where you have to type the essay in a web app, and the web app analyzes your keystrokes to determine if you're human.
If not already, then soon
(i.e. it was obvious in the first place, think along the lines of a ticket about a screen loading slowly, and then multiple paragraphs explaining the benefits of faster-loading screens.)
I would argue that those cases are really the ones that cause an LLM-specific harm, i.e., which make people feel like they aren't exclusively among fellow humans.
If someone posts something that doesn't clearly read LLM-ish, but is otherwise terrible, it's not really different from if the same terrible thing had been written by hand.
I don't think anyone who objects to LLM comments is really demanding a super-low false negative rate. Just get rid of the zero-effort stuff. For example, recently I've seen a lot of comments from new accounts that are just sycophantic towards TFA and try to highlight / summarize a specific idea or two, but don't really demonstrate any original thought (just, like, basic reading comprehension and an ability to express agreement). And they'll take a paragraph to do so, where a human with the same level of interest in the material might just say "good post" (granted, there's an argument to be made for excluding that, too).
https://news.ycombinator.com/threads?id=aplomb1026
....until you start scrolling down the page and it becomes screamingly obvious that everything it says comes from the same template.
Maybe the problem isn't just that AI produces gobs of useless crap. Maybe what's worse is that it can produce even more mediocre crap that crowds out the good?
All oatmeal, no steak, leads to "starvation" by poor nutrition.
I don't think your account is AI just by these few comments, but I would like to point out that most rubrics one might use to determine what is obviously AI might end up including the way you talk.
If there was a truly accurate tell, some algorithm you could feed a few sentences and it could tell you "yep, this is 100% AI", then sure, use that. But I don't know that you could realistically build that machine, especially when it comes to generated text.
Like, why would my comment (or yours, or any other comment) pass or fail the LLM check if I/you/someone else used specific prompts or another LLM to edit the output? These tools would probably work on 99.9% of outputs, but those outputs likely weren't created in an adversarial way.
I've always found this funny. Doesn't macOS' default text substitution enable (annoying to me) things like em-dash, smart quotes, etc?
Those low-value complaints add nothing to the conversation, and the content didn't make it to the front page because it was bad. If the sole objection is "AI bad", keep it to yourself... it's boring.
If anything should be banned, it's low-effort "This is AI" commentary. It adds absolutely zero to the conversation.
1: https://news.ycombinator.com/newsguidelines.html
Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.
I'd argue that whether or not the article (or reply) was written by AI is not a tangential annoyance at this point. Formats, name collisions, or back-button breakage are tangential to the content of the article; being AI-generated isn't. And flagging it adds to the overall HN conversation by making it easier to focus on meaningful content and not AI-generated text.
Basically, if the writer didn't do a good job checking and understanding the content we shouldn't bother to either.
The number of comments I see complaining about "it's not this, it's that" and other "LLMisms" definitely frustrates me more than the original content.
Blogging, sharing blog posts, reading them, commenting on them--these are all acts of human communication. Farming any of these steps out to an LLM completely breaks down the social contract involved in participating in an online forum like this. What's the point?
It's the exact same effect that's playing out in many other areas where LLMs are encroaching: bypassing the "human effort" step has negative side effects that people who are only looking at the output are ignoring.
I actually find your opinion so infuriating that it's taking all my composure to not reply with something nastier. If you guys want to spend your time reading shitty LLM spam posts with shitty LLM comments, why don't you find another site to do it on instead of destroying this one.
In its worst form, which I've now seen many times in other communities, users claim submissions are AI for things that provably are not, merely to dismiss points of view the poster disagrees with by invoking knee-jerk votes from people who have a disdain for generative AI. I've also seen it from users I suspect feel intimidated by artwork from established traditional artists.
Thankfully on HN it hasn't reached that level but I have seen some here for instance still think use of em dashes with no surrounding spaces is some definitive proof by pointing to a style guide, without realizing other established style guides have always stated to omit the spaces (eg: Chicago Manual of Style). This just leads to falsely confident assessments and more unnecessary comment chains responding to them.
What one hopes for with curated communities is that people have discriminating taste at the submission and voting level. In my own case I'm looking for an experience from those who have seen a lot of things and only finds particular things compelling and are eager to share them. Compared to some submission that reaches the front page of say popular programming language docs that just provide another basis for rehashed discussion (and cynically since the poster knows such generalized submissions do this and grow karma).
It is welcome though. Being on the front page regularly is evidence that people enjoy it or find it informative.
You may feel that others shouldn't be ALLOWED to enjoy it, but that's just your opinion and is almost always tangential to the actual topic.
Worse, you seem to believe that it needs to be labeled to help you identify it. Why? If it's good enough that you need help to spot it, then it's obviously of sufficiently high quality.
What makes you think that it's people who get it to the front page anymore? Or that most people aren't simply fooled by technology designed to mimic humans?
> Worse, you seem to believe that it needs to be labeled to help you identify it. Why?
Why not? Would adding a label and providing filtering capabilities hurt anyone else's experience?
Some people object to this content based on principle, not on its quality, or on how closely it resembles content authored by humans.
A lot of blogging is essentially self-expression and that stuff won't be taken over by LLMs (it defeats the whole point). Other blogging is done with some kind of sales/promotional/brand purpose and the extent to which LLMs will dominate this will depend on how we as a society react to it (see the AI art battles) since if people react negatively to it it becomes counterproductive.
I understand where you're coming from. I've been posting complaints about LLM-written articles almost as long as I've been here. (My analysis is definitely more complex than a search for blacklisted Unicode characters or words.)
But I've let off on that, partly because I agree the guideline is meant to encompass that kind of criticism (same with my comments about initial page content not rendering with JavaScript, honestly) but largely because it just seems futile. It's better material for a blog post than HN comments (and would be less repetitive).
> Blogging, sharing blog posts, reading them, commenting on them--these are all acts of human communication.
Not anymore. Bots are now the majority of producers and consumers of all content on the internet. The social contract you mention has been broken for years, and this new technology has further cemented that.
Those of us who value communication with humans will have to find other platforms where content authorship is strictly regulated, or, at the very least, where tools are provided to somewhat reliably filter out machine-generated content. Or retreat from public spaces altogether.
FWIW I have very little hope that this issue will be addressed on HN, considering [1].
Incidentally I foresee similar issues to this training data pollution arising with LLM coding taking over software engineering--which it inevitably is going to continue to do, at least in the short term. If LLMs torpedo human engineering, who is going to create the new infrastructure (tools, frameworks, programming languages, etc) that LLMs are making such good use of today? It feels to me like we risk technological stagnation as our collective skills atrophy and the market value of our skills plummets. Kind of like airplane pilots forgetting how to debug planes or handle edge cases because they just rely on autopilot all the time.
Low value content is still content, written by a human being with a specific point. I would argue that LLM written content is even worse than that, because what value does it add when you or I can just ask the LLM itself for it? Its existence is solely that of regurgitation.
Example: https://news.ycombinator.com/item?id=47122272
You have to scroll a few pages before the actual article is discussed.
"This was LLM generated" is likely to float to the top of a thread. That's where the best comments about the article deserve to go, not an off-topic complaint. An AI label should be much less obtrusive.
Or you could collapse the one thread containing those comments.
1. Your guess is not always correct
2. Over time, AI content will get harder to guess until it is indistinguishable from human content
3. You're not helping anyone by posting "this is AI". Maybe it is, maybe it isn't, but it's not helpful. It just adds to the noise.
Ideally there could be a label on the submission that states it's AI
A lot of people tried for #politics and that didn't work. I doubt you'll get #ai.
How very interesting.
Some people can really benefit from using LLMs to help them write. E.g. non-native speakers.
LLM-assisted writing doesn't have to be low effort; it can help people express themselves better in many cases. I'd argue that someone who spent their time doing multiple passes with an LLM to get their phrasing just right has obviously taken more care than the majority of people on HN take before commenting.
And if you don't like the way something is written? Just down vote it. That's true whether or not it's partially/wholly written by an LLM.
And what about users like this, whose comments are very clearly entirely LLM-generated and who may even be a bot? https://news.ycombinator.com/threads?id=BelVisgarra
I might agree (don't know) with the idea of limiting new accounts more heavily.
I think the point here is that the community doesn't want to read AI slop, not that using an LLM to clean up your writing contains some inherent evil that prevents quality.
I don't want to accuse you of strawmanning the argument, but honestly, where did you ever see someone advocating the latter?
> Some people can really benefit from using LLMs to help them write. E.g. non-native speakers.
Hard disagree. I have been learning another language and wouldn’t pass off posts as my own after an LLM rewrote them, because that is literally lower effort than learning the language properly.
Like definitionally, you are using a machine to offload effort. I don’t know how you could claim that is not “low effort” when that’s the point of the tool.
There are a lot of people who understand English fairly well, but are not actively learning the language, are not native speakers, and can use LLMs to catch grammar mistakes that they otherwise wouldn't notice. Or catch small nuances in what they are saying, small implications that could otherwise go unnoticed.
In general, I push back on people saying "I can't find a good/legitimate use for this technology, therefore there are no good/legitimate uses for it".
Is that genuinely what you think most of the complaints on HN are saying?
IMNSHO that's an absurd statement to make about the other side of the argument. I'm still giving the benefit of the doubt here but jeeze, this really smells like a strawman.
There are dozens of whole classes of criticism of these tools that I see made on HN, and none of them fall into the category you described.
Ex: Saying "juniors who rely on Copilot/Claude/etc become lazy and can create low quality code without learning how to do better" is night and day different from what you're saying. And that's a criticism that must be addressed or the entire global software industry will destroy itself in two generations.
Surely the difference between that and "we don't want anybody to use Grammarly in their subs that show up here" is completely obvious, yes?
/heavy sarcasm
That being said, my mother used to insist on hand-written cover letters from job applicants. Her rationale: it takes effort, so it weeds out all the applications from people who are just randomly spraying out applications for jobs they are not qualified for.
I would be so screwed. :-(
> worthy of an instant ban
First, it is not always possible to identify an LLM-generated comment. There are too many false positives. Imagine if this system was implemented, and one of your comments was identified as LLM-generated and you were instantly banned. How would you feel about it?

I have no idea what that could be useful for, but since the Turing test is now essentially beaten, maybe its usefulness has come and gone too.
> Imagine if this system was implemented, and one of your comments was identified as LLM-generated and you were instantly banned. How would you feel about it?
It sounds like a fast, efficient, inexpensive and foolproof recipe for destroying a community. Let's use that as a future test: anyone who advocates for it is undeniably trying to destroy HN, so they get downvoted to 1 karma and permanently blocked from voting on anything else.
What if someone used an LLM to just translate?
But in principle I agree with you, the rule for me is 'if it wasn't worth your time to write then it certainly isn't worth 1000x other people's time to read'.
So I would propose that, in the ideal world where we could perfectly enforce the rules that we chose, that the rule would be "AI for translation only". If it wrote your content, your comment is gone. If it translated content that you wrote, your comment is still welcome.
Wow this is really cyberpunk.
I'll bring my Yubikey!
I do like your idea, though.
Local groups have a problem where members admit their friends or pressure others into inviting their friends who are not a net positive, but it feels too impolite to refuse or to kick someone out. Meeting someone in person also develops a sense of a social bond that makes it harder to downvote or flag their posts.
Local groups have always been a haven for affinity fraud, too. Running a scam is easier when you can smile, be charismatic, and pretend to be a personal friend before springing your ask on your victims.
[0] https://handmadecities.com/memos/HMC-Memo-004-Meetup-Hosts.p...
I'd also like to see an "Order of the White Lotus" community (or Fight Club if you prefer) where people who collectively agree to not use AI against each other can come together. They can still use AI (e.g. out of necessity), just not with other members knowingly.
I suspect whatever form it takes the stakes will be very high to hack yourself into and pollute the space. So the more successful the community becomes, the harder it is to keep in order.
p.s. @patrickmay: jinx!
If you are Sisyphus, the fact that the hill is infinite is useful when planning your day.
We have genAI generating videos and the quality sucks compared to human produced and filmed content. People call it out and nobody is going to watch a genAI movie at the theater or binge a genAI TV show. Merit based filtering.
GenAI for music is not as good as human-generated music either. Not a single AI song from Suno or Udio has reached the top 40. Not even one. 100% of the songs are human because they are evaluated on merit.
We have SWE and agentic benchmarks to evaluate coding LLMs on merit.
Disclaimer: I am a new account.
Welcome. Illegitimi non carborundum.
This falls apart as soon as you realize that evaluating the text requires far more effort than generating it. If you're spending 2 minutes reading text that took 2 seconds to generate, you already lost.
The HN user base is not perfect at detecting LLM content but a lot of it does get flagged and downvoted eventually. About once a day I’ll click on a link, realize it’s AI slop, and go back to HN to flag it but discover that it’s already flagged.
If you turn on showdead you can see all of the comments from LLM bots that have been discovered and shadowbanned.
The fallacy in the comment above is simple: It’s taking the current situation and extrapolating to an extreme future, then applying the extrapolated future prediction on to the current situation. The current situation does not represent the extreme future predicted. A lot of the LLM content is easily spotted and a lot of it is a waste of time to read, therefore it’s right to police and ban it. Even if imperfect.
I'm not sure we can. Imagine an AI that 1) creates multiple accounts, 2) spews huge numbers of comments, 3) has accounts cross-upvote, and then 4) gets enough karma on multiple accounts to get downvote privileges. That AI now controls the conversation. Anything it doesn't like, it can downvote to death.
I mean, I'm sure that HN has a "voting ring" detector, but an AI could do this on a sufficient scale to be too large to register as one cohesive group. And I think HN has a "downvote brigading" detector, but if the AI had enough different accounts, I'm not sure that would trigger, either.
The best chance to detect it is just on volume (or perhaps on too many accounts coming from the same IP address or block). But if the AI was patient, I'm not sure even that would work.
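For what it's worth, the kind of co-voting overlap heuristic such a detector might start from can be sketched in a few lines (toy data and a made-up threshold; this is not HN's actual system):

```python
from itertools import combinations

def covote_similarity(votes_a: set, votes_b: set) -> float:
    """Jaccard overlap of two accounts' upvote sets."""
    if not votes_a or not votes_b:
        return 0.0
    return len(votes_a & votes_b) / len(votes_a | votes_b)

def flag_rings(votes_by_account: dict, threshold: float = 0.8) -> list:
    """Return account pairs whose voting histories overlap suspiciously."""
    flagged = []
    for a, b in combinations(sorted(votes_by_account), 2):
        if covote_similarity(votes_by_account[a], votes_by_account[b]) >= threshold:
            flagged.append((a, b))
    return flagged

# Toy data: three sock puppets voting in lockstep, one organic account.
votes = {
    "alt1": {101, 102, 103, 104},
    "alt2": {101, 102, 103, 105},
    "alt3": {101, 102, 103, 104},
    "organic": {201, 102, 305, 406},
}
print(flag_rings(votes, threshold=0.6))
# → flags the three sock-puppet pairs, never "organic"
```

Which is exactly the weakness described above: a patient adversary keeps per-pair overlap low across many accounts and slips under any such threshold, so volume and IP signals end up doing the real work.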
That's depressing. I don't want HN to become a bot playground, with humans crowded out. But I'm not sure we can stop it, if it was done on a large enough scale.
The OP is talking about posts, not comments. The simplest solution might be to prevent someone from posting a "Show HN" until they’ve earned twenty-five or fifty karma, to demonstrate that they’ve been actively participating on Hacker News rather than using it solely to promote themselves.
It’s a speed bump at best.
Honestly, we don’t really have the same cold start problem that a brand new social media site would. We already have plenty of reputable active users here. So HN could restrict new accounts to only being able to comment initially. As they participate, their comments receive upvotes, allowing them to build up enough karma (even a small amount of 25) which unlocks the ability to upvote, and then, finally, the ability to create posts.
Actively encouraging this will only make things worse.
I'd rather see you gone than the people you complain about.
If everyone votes purely on basis of the first letter of the username, to use your example, then the votes provide no useful information and you may as well abolish it.
If someone in a chatroom, for example, is being spammy with their messages at the expense of posts one finds more relevant, then blocking them isn't about considering them some filthy pleb but about improving one's own experience. If the user being filtered never becomes aware, there's no reason to be offended, either.
Edit: also I wasn't the one to downvote you if that makes any difference.
Minimum karma and account age filters are discriminatory, anti-social features that should not exist on any social site. The people asking for such features are intolerant jerks, no different from ageists or ableists. They are parasites, because they want the people who are not intolerant jerks to do their filtering for them, and keep the site alive by doing so.
What would happen if every single user enabled their minimum karma filter?
The comments here are about possible mitigations. Based on this feedback dang has apparently now restricted new accounts from posting Show HN threads, so globally now there is a form of filtering users from being seen by others based on a heuristic.
Your initial comment is written with the impression that the poster wanting to improve their chances of higher effort content is making some judgement on the posters themselves as though they're conceited ('filthy masses', 'your royal highness') when they're merely considering one approach to reducing noise from their feed.
I myself in this very comment chain have already posted that I disagree that filtering by karma would help due to gaming issues but I don't see the problem with the user's goal.
Hacker News would be a much better place.
In fact, filter stories as well as users. I want to filter out any story with fewer than three upvotes and any flagged comments. That would improve quality tremendously.
Again, this system can only work if there are at least _some_ people that are willing to upvote newbies and read new posts.
It sounds like what you want isn't a community with collaborative filtering, like Hacker News, but a newsletter with editors, like Slashdot for example.
What I want is for green accounts not to be abused as much as they are. The number of noxious, vitriolic troll alt accounts and bots is getting ridiculous. That is almost entirely the fault of established users of course, but there's no way to deal with them poisoning the well without affecting new users.
Aggressively filtering to raise the average post quality is a sugar rush and it has the metaphorical long term consequences of type-2 diabetes. Things start out feeling great but the acceleration of death is effectively guaranteed.
I didn't actually create my account until 2021? 2022? I can't remember. And I didn't make my first post or even comment until just last week.
While I think a minimum post count or reputation metric could perhaps reduce the AI generated posts, introducing friction also makes it harder for real people to contribute anything meaningful.
Furthermore, what does it matter if it's "AI generated"? Is some AI content ok? What's the pass/fail threshold on human vs AI generated text?
I made a Show post last week where I heavily relied on AI. I'm sure there are some "tells." But even so, I spent more than three hours working on the content of my post and my first response. Would my post have been acceptable to you?
If a human put his effort into it, is proud of it and wants to show it to the world, i'm happy to invest some time to have a look at it and maybe provide some helpful feedback.
I'm not willing to invest my time into evaluating the more or less correct sounding ideas of a ML model.
Some of us need assistance to communicate effectively. And for me, yes that took 3 hours even with this assistance.
i.e. only surface stories posted by or upvoted by those you trust, and the inverse with those you distrust.
Then exponentially drop off trust transitively and it could be almost workable.
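A minimal sketch of that transitive-trust idea, assuming a simple vouch graph and an arbitrary decay factor (distrust would need a mirrored negative pass, omitted here):

```python
def trust_scores(edges: dict, seeds: dict, decay: float = 0.5, depth: int = 3) -> dict:
    """Propagate trust outward from directly trusted accounts,
    attenuating by `decay` at each hop; keep the best score per account."""
    scores = dict(seeds)
    frontier = dict(seeds)
    for _ in range(depth):
        nxt = {}
        for user, score in frontier.items():
            for vouched in edges.get(user, []):
                propagated = score * decay
                if propagated > scores.get(vouched, 0.0):
                    scores[vouched] = propagated
                    nxt[vouched] = propagated
        frontier = nxt
    return scores

# Toy vouch graph: who each account trusts (e.g. has upvoted).
edges = {"me": ["alice"], "alice": ["bob"], "bob": ["carol"]}
print(trust_scores(edges, seeds={"me": 1.0}))
# → {'me': 1.0, 'alice': 0.5, 'bob': 0.25, 'carol': 0.125}
```

Stories would then only surface when the submitter's score clears some cutoff; a cutoff of 0.1 here admits friends-of-friends-of-friends and nothing further out.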
Most I have encountered (generally via referral tracking) are heavily curated centrally though, and not by users.
X vs BlueSky is a thing after all. Reddit, wikipedia etc. are just farcical.
Like Facebook/Linkedin?
LinkedIn more so than Facebook; Facebook shows a list of common contacts, LinkedIn shows that plus a literal resume.
Other subs are slowly being inundated with hidden history spammers …
Bad times.
I once proposed a scheme like this where you would donate to charities who would post lists of serial numbers they had received, for this purpose, but it never got anywhere. Maybe we need it more now than we did then.
I guess instead of mailing a $1 bill, if necessary it could be a hand drawn picture of a kitten (artistry not required). Authentication would involve checking the paper for pressure marks made by the pen. I wonder how many would take the trouble to fake that.
It feels wrong at first to pay for commenting on a forum, but the alternative is almost always a gentle slide towards a trash dump. AI means that slide is almost a vertical slope.
Your post advocates a
( ) technical ( ) legislative ( ) market-based ( ) vigilante
approach to fighting spam. Your idea will not work. Here is why it won't work. (One or more of the following may apply to your particular idea, and it may have other flaws which used to vary from state to state before a bad federal law was passed.)
( ) Spammers can easily use it to harvest email addresses
( ) Mailing lists and other legitimate email uses would be affected
( ) No one will be able to find the guy or collect the money
( ) It is defenseless against brute force attacks
( ) It will stop spam for two weeks and then we'll be stuck with it
( ) Users of email will not put up with it
( ) Microsoft will not put up with it
( ) The police will not put up with it
( ) Requires too much cooperation from spammers
( ) Requires immediate total cooperation from everybody at once
( ) Many email users cannot afford to lose business or alienate potential employers
( ) Spammers don't care about invalid addresses in their lists
( ) Anyone could anonymously destroy anyone else's career or business
Specifically, your plan fails to account for
( ) Laws expressly prohibiting it
( ) Lack of centrally controlling authority for email
( ) Open relays in foreign countries
( ) Ease of searching tiny alphanumeric address space of all email addresses
( ) Asshats
( ) Jurisdictional problems
( ) Unpopularity of weird new taxes
( ) Public reluctance to accept weird new forms of money
( ) Huge existing software investment in SMTP
( ) Susceptibility of protocols other than SMTP to attack
( ) Willingness of users to install OS patches received by email
( ) Armies of worm riddled broadband-connected Windows boxes
( ) Eternal arms race involved in all filtering approaches
( ) Extreme profitability of spam
( ) Joe jobs and/or identity theft
( ) Technically illiterate politicians
( ) Extreme stupidity on the part of people who do business with spammers
( ) Dishonesty on the part of spammers themselves
( ) Bandwidth costs that are unaffected by client filtering
( ) Outlook
and the following philosophical objections may also apply:
( ) Ideas similar to yours are easy to come up with, yet none have ever been shown practical
( ) Any scheme based on opt-out is unacceptable
( ) SMTP headers should not be the subject of legislation
( ) Blacklists suck
( ) Whitelists suck
( ) We should be able to talk about Viagra without being censored
( ) Countermeasures should not involve wire fraud or credit card fraud
( ) Countermeasures should not involve sabotage of public networks
( ) Countermeasures must work if phased in gradually
( ) Sending email should be free
( ) Why should we have to trust you and your servers?
( ) Incompatibility with open source or open source licenses
( ) Feel-good measures do nothing to solve the problem
( ) Temporary/one-time email addresses are cumbersome
( ) I don't want the government reading my email
( ) Killing them that way is not slow and painful enough
Furthermore, this is what I think about you:
( ) Sorry dude, but I don't think it would work.
( ) This is a stupid idea, and you're a stupid person for suggesting it.
( ) Nice try, assh0le! I'm going to find out where you live and burn your house down!

This site is designed so that the wannabees are incentivized to lie and show off to get some of the sweet VC the whales are sitting on. The cost of lying at volume is down to zero, and here be nerds trying to solve a human problem with technology. Maybe show first that you can solve spam or bot networks.
Somewhat lighthearted solution: employ Unix graybeard volunteers to weed out the garbage. I'd like to see HN showoff slop like "Distributed Kubernetes Package Manager using Blackwell-Hermann CRTDs in 500 lines of Go" get past Linus or Stallman.
345 comments | 64 hidden | 50 blocked | 15 green
So that I don't see people who annoyed me for one reason or another in the past, I auto-hide the top 1000 accounts by word count, and I hide all green users. This was trivial to write for myself and I think more people should work on something like this for themselves.

So maybe we should just be honest about this: our standards have risen. We want to see Show HN posts that require effort and dedication, that require more than a few hours of prompt flogging.
One example: https://news.ycombinator.com/item?id=46884481
Read the comments and you'll see it took time and effort, from people who know at least a little about what they're saying, to point out that it's AI slop that doesn't live up to the claims written in its own docs.
I think that's great in moderation as it stimulates ideas and discussions, shows us what folks are working on, etc... but this can't become Product Hunt. The reasons for posting here should be vastly different than posting on Product Hunt.
This also appears to cause a serious shift in the kind of projects that are submitted (i.e.: towards things that are much more accelerated by AI assistance).
That being said, there is an above-average sub-trend of low-quality submissions that are obviously trying to plant a money tree. This is largely driven by the "look ma, no hands" AI tools like OpenClaw, mixed (Venn-style) with the crypto crowd looking to make easy money with near-zero effort.
With that being said, I have definitely seen some real bangers with large AI contributions. So I am generally in favor of minimally changing how HN works today. One small change would be adding to the Guidelines and FAQ, giving the agents something to read before posting (such that they know that automated submissions are not allowed [1]).
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
Also the purpose of Show HN along with HN in general is to spark intellectual curiosity and create interesting conversation, and nothing about LLM generated code does that, because the person who prompted the AI to make it doesn't understand it and can't discuss it in any depth.
They'd assume this, even if they hadn't used AI, and even if AI didn't have the ability to pull it off.
Not sure I understand your 2nd argument though?
Sorry, I meant in the context of that original dev their earnest fixation/obsession with their creation came across in their personality that I think made people sympathetic.
randusername_2022
I'm right on the boundary of the slopocene, not sure if in or out.
Am I too late to get ahead of the curve and stockpile some, while they're still relatively cheap?
the problem is that once this is found out, the circumvention is easy enough to program into bots/LLMs.
are we going to reinvent the Voight-Kampff test from Blade Runner?!
But then again, some of the most prolific, most upvoted accounts on this site constantly flood the site with political content and nothing is ever done about it and they get rewarded for it .. so yeah. I gave up hope a long time ago.
My initial thought is to set up a devoted account like "sock_puppet_detector", use the infrastructure from https://hackersmacker.org/, and add any likely sock-puppets as 'foes'. Then anyone can install hackersmacker, and add "sock_puppet_detector" as a friend to see sock-puppets highlighted. Likewise for rules violators.
Losing that seems too high of a price to pay. Yes there are AI generated comments, in the past there has been script generated comments. You can report, downvote, or just ignore and move on. I am aware of posts like this existing, but I feel they are being effectively managed.
Try not to be too offended about the notion of these posts existing. Many of them are not malicious, they're just caused by users stepping outside what is considered appropriate, but in a landscape where the footing is quite dynamic, everyone is making their own judgement calls in a field where the consensus is not clear, guidance seems more appropriate than punishment here.
Yes, and sometimes some of the HN automatic filters kills the comments. Remember to "vouch" the comments if they are interesting/relevant, a few "vouches" unkill the comments. And in extreme cases, send an email to hn@ycombinator.com so dang/tomhow can take a look and use some magic to fix the problem.
Assuming the mods just auto-ban new accounts and require them to be vouched and to earn minimum karma before being visible, those comments can be vouched up or approved by the moderators. The poster won't know that they've been banned, of course, because that's how shadowbanning works, so the approval process should be seamless for them.
But how often does that happen versus the AI comments and alt account trolling?
>but in a landscape where the footing is quite dynamic, everyone is making their own judgement calls in a field where the consensus is not clear, guidance seems more appropriate than punishment here.
The consensus is and has always been clear. Generated comments of any kind have never been allowed. People just don't care, and that's a problem.
And those comments are malicious in effect if not intent. We're here to have conversations with human beings, the intellectual and emotional connection is important. What is the point of having conversations with a machine, much less not knowing one is having a conversation with a machine? If nothing else, it's dehumanizing and a waste of time.
I don't believe that is possible. I think malice requires intent.
The fact that Hanlon's razor needs to be said at all is to warn people about attributing malice to instances where none exist.
"Never attribute to malice that which is adequately explained by stupidity."
It's not saying that actions attributable to malice never occur. It's saying it usually isn't malice.
Bots are recognizable and can be selectively ignored. But an echo chamber that would result from measures like this cannot be, because you cannot see the potential comments and posts that were snuffed out because someone didn't bother.
If you want HN to be a place to feel comfortable and your world view to be unchallenged, sure, go ahead. But then we already have reddit.
Moderators don't have the capacity (and fairly, it is impossible) to check if they are bots or humans.
There are no good solutions. There are hundreds of thousands of intelligences out there, trained millions of hours on how to scam humans, capable of spitting out text tirelessly and shamelessly, and there will be only more of them, tens, hundreds, thousands of times more.
If you look at the leader board (https://news.ycombinator.com/leaders), you'll find a few old accounts that pretty much do nothing but farm links, posting sometimes dozens of times a day, with a very low percentage of comments. Their high "score" isn't an indicator of quality; they just spam enough that a few get some good upvotes, but most of their submissions are low quality.
Each one gets 4-5 karma, a few crack double digits. Post 10 or 20 a day over a year or two and they're five figures. Pure farming.
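The arithmetic checks out; a quick back-of-envelope using the numbers claimed above (the midpoints are my own guesses):

```python
# Back-of-envelope: low-effort link farming at the rates described above.
posts_per_day = 15          # "10 or 20 a day" -> midpoint
karma_per_post = 4.5        # "4-5 karma, a few crack double digits"
days = 365 * 1.5            # "over a year or two"

total = posts_per_day * karma_per_post * days
print(f"~{total:,.0f} karma")  # → ~36,956 karma
```

Comfortably five figures, without a single submission ever needing to do well.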
..or are they selling the accounts?
Btw, restricting new accounts (based on karma/age/whatever) could be combined with the option to ask mods for permission somehow, although that'd have to be done in a way that doesn't become too much work.
The system has long been that anyone can email the mods and ask us to review their project, but the volume has grown so much in recent weeks that it’s hard to get much else done.
Additionally, dang had replied on it: https://news.ycombinator.com/item?id=47050421
1. Exist for some time.
2. Vote on stuff that humans would vote for.
3. Avoid voting on traps.
4. Comment occasionally and productively.
5. Post to a limited existing audience, and receive upvotes.
6. Post limitedly to a general audience.
7. Post generally.
It’s basic earn a reputation behavior.
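That ladder is easy enough to express as a karma-gated permission table; the thresholds below are entirely hypothetical, just to illustrate the stepwise shape:

```python
# Hypothetical thresholds -- HN's real numbers differ; this only
# illustrates the "earn privileges stepwise" idea from the list above.
LADDER = [
    (0,   {"comment"}),
    (10,  {"comment", "vote"}),
    (25,  {"comment", "vote", "post_to_limited_audience"}),
    (100, {"comment", "vote", "post_to_limited_audience", "post"}),
]

def privileges(karma: int, account_age_days: int, min_age_days: int = 7) -> set:
    """Privileges unlocked at a given karma, gated on a minimum account age."""
    if account_age_days < min_age_days:
        return set()  # step 1: exist for some time
    granted = set()
    for threshold, perms in LADDER:
        if karma >= threshold:
            granted = perms
    return granted

print(sorted(privileges(karma=30, account_age_days=20)))
# → ['comment', 'post_to_limited_audience', 'vote']
```

The point being that each rung is cheap for a human participating normally and expensive for a bot farm to climb at scale.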
I'm not saying your idea is bad necessarily but giving another perspective.
It almost feels like new accounts should be treated like new posts -- it is sort of a service that a select few are willing to undertake to upvote interesting stories early on.
I wish even more I could block specific users (there are some highly prolific, high karma users here who are extremely irritating), but that's harder and is probably best handled client side.
Such a sad development.
There are still quality submissions by new accounts and HN is good at pulling those needles from the haystack.
I believe it's a policy or moderation enforcement issue. Such as banning incomprehensible / low value posts whether generated by AI or not.
There are barely any bots on Twitter. There were thousands upon thousands of bots before 2023, because the API was free. These days running a bot on Twitter is expensive.
Fun fact: a company I worked for in the past had access to an undocumented partners-only API that allowed us to register unlimited number of accounts. I was personally tasked to handle the integration.
Same reason that burglars don't typically target security camera stores and robbers don't typically target police departments - it's basically a fast-track to early detection, which disrupts the main objective of the adversary.
I think a simple solution (and one that eventually every content platform will have to adopt) is to allow users to tag AI-generated spam. I think that a few years from now this feature should be the norm, like existing basic features on forums such as upvote, downvote, favorites, hide, etc. I know this will require much more development effort than simply blocking new accounts from posting at all. But on the other hand, you can’t block new accounts forever.
In addition, I’ve been here on HN since the late 2000s. Look, it’s a new profile. Also sometimes I use AI to help craft better responses. Do with that what you will.
Unfortunately, I was not able to "reorganize" comments/posts in a manner that I felt was particularly "better", and didn't keep the plugin, for whatever that's worth.
I think it would be more prudent to overlay a web-of-trust, where accounts which submitted links/comments that you upvoted are then given significantly higher priority in other threads/feeds (unfortunately downvotes are not made apparent on HN, but factoring downvotes would also help.) Exposing your web-of-trust may also assist others in determining trusted content.
Perhaps this web-of-trust approach is dystopian on the order of MeowMeowBeenz, but I have not heard any other practical solutions to the disintegration of trust which is upon us.
Edit: Elsewhere in this thread HackerSmacker was mentioned, which is what I'm describing. That's exciting, I'll be trying it out later.
Moderation is already taxing as it is.
I’ve read all of the source and I drove the architecture but it would be a stretch to say I didn’t ask for assistance on things that felt fuzzy or foreign to me. I also have generally stopped typing code. I still don’t think the LLM made the project though, it feels like my decision making.
If the bar for Show HN becomes no AI whatsoever then you’re just going to see a bunch of people covering their AI tracks. I’m reluctant to post it because I’m afraid of getting blasted by the community for using AI. At the same time, it is work that I’ve poured hundreds of hours into, that I’m proud of and that I think would be of interest to HN.
I read the Obliteratus post that made it to the front page the other day and I agree that is pure slop. While it’s frustrating that it took up front page space, it’s evident that the whole community caught on to the sloppiness of it all immediately and called it out. I just don’t think HN wants to set the precedent that no AI code should be shared.
I also saw a week or two ago that someone open sourced a project of theirs that wasn’t open source in the first place. The reason they stated was that they had vibe coded and were embarrassed to be discovered. If you want to get a concept out quickly with AI, you’re now hesitant to open source because of the precedent set by the community. I think that’s a scary thought to me. I would rather know the tools I’m using are AI generated/assisted and make the value judgement on if I trust the code and project owners.
Blatant slop is obvious. Slop with a modicum of effort is harder. I’m still adjusting my slop-o-meter on other people’s work. It’s easy for me to identify my own slop, it’s not always so obvious when looking at someone else’s AI assisted work.
Think back to prohibition. Just because we want less public drunkenness doesn't mean it is wisest to ban alcohol. One has to ask: what is the chance the ban is successful? What happens when it cuts the wrong way?
To what degree do we care about (1) "human" versus "AI"; (2) comment quality; (3) sensible methods for revealing social preferences? I care a lot more about the latter two than the first. It doesn't have to be a zero sum tradeoff, but I think it is a good starting question.
Let's have that discussion and not try to solve the human-vs-AI classification problem.
> I would much rather read a human comment with typos and poor grammar than another piece of anodyne LLM output that shows only that the responsible party doesn't value the human interaction that I do.
I take your meaning. However, that phrasing doesn't cut to the core of it. Rationalists would say "this doesn't carve reality at the joints". Here is my attempt to disentangle, decomplect**, and find common ground. Let me know which of these you disagree with, if any:
1. I care relatively little about typos and grammar, as long as the ideas are clear.
2. I enjoy human connection with people, in person and online. I would prefer to have a person on the other end.
3. Actually, I'd go further... I'd like to have more personable conversations and work past a lot of the common online discussion failure modes (but now I'm wandering off topic).
4. When chatting, I care a lot about the quality of the underlying thinking.
5. I personally don't want to read someone's first knee-jerk take.
6. I prefer to read a thoughtful and clear expression of a human being's experience.
7. On HN in general, I want curious conversation.
8. I understand everybody brings some point of view and sometimes what one would call an agenda.
9. Maybe the top criterion for doing #6 well is: are we interacting with each other per the guidelines? Charitably, in good faith, and with curiosity (#7).
10. As an example of an unwelcome agenda (#8), I don't want to be inundated with commercially-fueled marketing-speak. However, to be clear, in this regard, I don't care if it comes from a person or an LLM.
11. Speaking for myself, not for HN, I don't mind if #6 is assisted by LLM editing.
12. Why #11? Because I care more about having a human being in the loop than a human shaping every single aspect. (See also the next point.)
13. One key for me is: does the person stand behind what they post? Meaning: are they accountable to it? Do they own it?
14. In addition to "original thoughts" (as if humans ever really have them!), #13 applies to someone borrowing, remixing, or outright stealing phrases they've heard before.
15. If someone uses an LLM to edit their words, it feels not too different from #14. Except when ... (see #16)
16. Sometimes people use LLMs to not actually put in the work of reflecting and thinking. This is sad for them and sad for people who have to read it.
17. Unfortunately, even without LLMs, some people don't put in the work of thinking and reflecting. See #5.
18. Putting #16 and #17 together, it is NOT the part about the LLM that bothers me! It is the lack of reflecting and thinking!
19. Asking someone else to read your post before sending is totally ok.
20. #15 done well may be no different from (and may be better than!) #19.
21. In a forum where a comment is read many more times than it is written, I consider it more respectful to put an appropriate amount of effort into writing.
22. If a person takes the time to write something out in their own words, that is a signal of respect for the audience. Especially in comparison with, say, just replying with a trope.
23. If a person uses an LLM to research and clarify their thinking AND is thoughtful about it (#6) AND stands behind it (#13), that is a signal for respect for the audience.
Fin.***
Here's what I'm driving at: I recommend that people put the effort in to figure out what aspects bother them about this moment in time with so much GenAI output. It is a real PITA to make sense of, but not doing so doesn't make that PITA go away.
* I hope you don't mind that I used a NCLM, a neuro-cognitive language model, to construct this... a.k.a my brain. Snark aside, does the substrate matter, in the long-run?
** Rich Hickey is my home boy.
*** Why do I number my points? Well, I work hard to tease apart my ideas; it is part of my writing and thinking process. Sometimes I put them back into a paragraph, sometimes not. But I like trying it out: I think it makes it easier to refer back to ideas and build upon them.
I find it's worse here now than on X. Literally every discussion turns meta and gets severely politicized. On certain topics you get flagged out by a mob for stating facts.
At least on X reply bots are not allowed anymore. Blue checks are useless tho.
I disagree, but in any case the easy solution in that case is to use X instead of HN.
> At least on X reply bots are not allowed anymore.
In theory, maybe.
From the perspective of usually just swinging into a post from the front page, when I do see green, it's usually overtly political trolling, and dead from the start. So I had assumed new account = everyone sees your post in gray, at least for a week or two.
I don't envy the "Show HN:" case. It can be intractable; story time:
Last week, there was a "Show HN:" post for a GitHub link, made it all the way to #2. It was a Flutter app, written up as if it did all the stuff you'd want from an open source LLM client. I said to myself "geez, I knew I took too long to deliver the thing I've been working on for 2 years. the MVP version is insanely popular."
-- only after digging into the repo for 10 minutes, with domain expertise, did I realize it was a complete Potemkin village, built by Claude. And even then, I was afraid to post something pointing this out because it required domain expertise, and it could have read as negative rather than principled.
All that to say, some subsets of The AI Poster Problem now require having intimate domain expertise and 10 minutes to evaluate it. :/
Additionally, the Claude 4.6s and GPT-5.4s are better than me at posting on HN now. :/ And I've been here 16 years. The past couple days, any comment I write involving some sort of judgement or argument is by Opus 4.6 or GPT-5.4, via: 1) dump HN post into prompt 2) say "I feel $X about this, write me an HN post that communicates this but not negatively".
I'm a little ashamed to admit that, if you look through my post history, you'll definitely see a repeated pattern over 16 years: someone who is very negative and has a hard time communicating it constructively. The models are smart enough now to extrapolate observations in the way I want to, while avoiding my own tarpits.
And beware of what's already in context. Sometimes ideas that seem obvious given antecedents are not so obvious when taken in isolation.
Genuine innovation is what we most want to encourage. That's what Show HN has always been about.
The problem now is that coding assistants have dramatically lowered the bar for getting a product or tool working, without the need for much innovation. We need new ways of identifying projects that are genuinely innovative so that their creators can be fairly rewarded, rather than being drowned out.
(edit: And thus such bots can't easily discover that they shouldn't post, afaict)