Ask HN: How do we solve the bot flooding problem without destroying anonymity?
9 points
12 hours ago
| 10 comments
AI posts are becoming indistinguishable from human posts, and we can see it here on HN. The conventional response by website operators is to put in progressively tighter verification systems to distinguish bots from humans, but that road eventually leads to the end of anonymity.

This is not an anti-AI rant. If a future AI agent truly has high quality posts and wants to use the site normally, that's fine. I'm talking about spam campaigns with hundreds of new accounts. We need new solutions to this problem.

I'll start by proposing a solution that could work for HN and similar forums. Feel free to iterate on it or propose your different ideas in the comments. Here goes:

For logged-in users, instead of ranking posts and comments server-side, the server delivers only a chronological feed plus the current user's own voting history.

Using the chronological feed as the base, each of your past votes changes the ranking of your feed by a tiny bit, and that's calculated client-side. You're more likely to see posts and comments from users you've upvoted in the past at the top.

In short, this means a new account will see a completely chronological feed, while an established account will see a feed modified by only their own past votes.
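The mechanism above can be sketched in a few lines. This is a hypothetical illustration, not HN's API; field names like `author` and `dir` are made up, and the `boost` weight is arbitrary:

```python
def rank_feed(feed, my_votes, boost=1.0):
    """Client-side re-ranking: the server sends a chronological
    `feed` (newest first) plus `my_votes`, this user's own voting
    history. Each past upvote of an author nudges that author's
    posts toward the top; a brand-new account (empty history)
    sees pure chronological order."""
    # Count how often this user has upvoted each author.
    upvoted_authors = {}
    for vote in my_votes:
        if vote["dir"] == "up":
            author = vote["author"]
            upvoted_authors[author] = upvoted_authors.get(author, 0) + 1

    def score(item):
        index, post = item
        # Base score is the chronological position (0 = newest);
        # each prior upvote of this author subtracts `boost` positions.
        return index - boost * upvoted_authors.get(post["author"], 0)

    return [post for _, post in sorted(enumerate(feed), key=score)]
```

Because Python's sort is stable, posts with equal scores keep their chronological order, so an account with no voting history sees exactly the feed the server sent.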

The public feed for non-logged-in users would still be ranked by the server. No changes there.

So each user gets a fully personalized bubble when logged in, except it's not a bubble because n=1. And it's really easy to break out of the bubble by logging out.

Spam bots can post and vote all they want, but they won't change the core userbase's experience much, because a bot only ever sees the chronological feed. It has no taste — and taste is accumulated through voting over time — so it can't target its votes and replies at the conversations real users actually see nearly as effectively.

youniverse
35 minutes ago
[-]
My mind goes to simple solutions, like established communities charging a $1 entry fee. For privacy you could maybe pay with a privacy coin, but that's a decent amount of friction for average folks with the current UX.

Another interesting idea that comes to mind: every post/comment requires the user to physically use the fingerprint scanner on their device, which I assume plenty of devices already have. As long as it can't be spoofed it works, but I'm not sure about the details of reliably securing that.

It would be some friction but I feel like it would be fine?

reply
allinonetools_
8 hours ago
[-]
The biggest signal I have noticed over time is consistency, not just one good post. Accounts that participate normally for weeks build a kind of trust naturally. Maybe weighting activity history more than identity verification could help without hurting anonymity.
reply
throwaway5465
8 hours ago
[-]
Creates echo chambers and karma-whoring "power" accounts, rewards ego-posting, and generally makes the experience about who says what, not what is said. Worsens the problem.
reply
judahmeek
4 hours ago
[-]
Echo chambers exist no matter what, and "who says what" is an essential aspect of determining transferable credibility.

Without transferable credibility, any ratings system simply becomes a question of which side spams the most.

reply
fernando_campos
10 hours ago
[-]
One issue I keep noticing is that most anti-bot systems optimize for blocking instead of increasing friction progressively.

Rate limits tied to behavioral patterns rather than identity seem to work better — especially interaction timing, navigation flow, or session consistency.

We experimented with something similar while building HiveHQ and found bots usually fail when systems require small contextual actions humans do naturally.
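To make the interaction-timing idea concrete, here's a minimal sketch — my own illustration of the general technique, not HiveHQ's actual system, with thresholds picked arbitrarily: track the gaps between a session's recent actions and escalate friction when they look inhumanly fast or inhumanly regular.

```python
from collections import deque
import statistics

class TimingCheck:
    """Sliding-window interaction-timing heuristic (illustrative
    only). Humans pause to read and act irregularly; scripted
    sessions tend to be fast, metronomic, or both."""

    def __init__(self, window=10, min_mean_gap=2.0, min_jitter=0.25):
        self.times = deque(maxlen=window)   # recent action timestamps (s)
        self.min_mean_gap = min_mean_gap    # below this avg gap: too fast
        self.min_jitter = min_jitter        # below this stdev: too regular

    def record(self, t):
        self.times.append(t)

    def looks_automated(self):
        if len(self.times) < 5:
            return False  # not enough evidence yet
        gaps = [b - a for a, b in zip(self.times, list(self.times)[1:])]
        mean_gap = statistics.mean(gaps)
        jitter = statistics.pstdev(gaps)
        return mean_gap < self.min_mean_gap or jitter < self.min_jitter
```

A flag here wouldn't block outright — it would trigger the "small contextual actions" step, which is the progressive-friction point being made above.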

reply
judahmeek
4 hours ago
[-]
So... use advanced pattern matching to determine human patterns & reject outliers?

Interaction timing is like rate limiting, but more granular

Navigation flow basically requires bots to use a headless browser instead of APIs.

What does session consistency mean in this context? Restricting to a limited number of interests & activity times?

reply
austin-cheney
12 hours ago
[-]
Read this comment and use the script in the linked subject:

https://news.ycombinator.com/item?id=47203918

reply
bruceyao1984
9 hours ago
[-]
We could solve bot flooding by raising the cost of automation, not by removing anonymity. Techniques like behavioral detection, rate limiting, proof‑of‑work, reputation systems, and AI‑based anomaly detection can filter bots without requiring real‑world identity. The goal isn't to know who you are — it's to know whether you're human.
reply
apothegm
27 minutes ago
[-]
One of my least favorite patterns online: sites that decide that I’m a bot because I open a whole bunch of tabs in the space of 15 seconds with the products I want to evaluate or articles I want to read.
reply
chistev
8 hours ago
[-]
Haven't noticed any negative changes.
reply
judahmeek
3 hours ago
[-]
OP, you're on the right track.

The question you need to ask yourself is "What's the end game?"

What happens when users' feeds are full of users that they already know?

You think they'll be satisfied with that?

reply
drsalt
10 hours ago
[-]
make spamming illegal, give severe punishments, and enforce the law
reply
hash07e
3 hours ago
[-]
There is no way to solve it without going to tribalism.

Bots and AI are, right now, as good as the "average" Joe.

Every place that can shape real people's perception of products, politics, or any other form of power will be, and already is being, flooded with bots.

The push for "ID" on the internet is not really about the children. It's that the bots are so good that sites need ID to filter out what is and isn't a bot, to avoid the dead internet.

The powers that be NEED to sway perception and narrative to their liking.

Think of the children! Epstein list, Patriot Act, etc.

reply