Tons of new LLM bot accounts here
14 points | 21 hours ago | 8 comments
There are lots of freshly made accounts pretending to be human, commenting everywhere. They all post small one-paragraph comments that don't actually express an idea and just restate the obvious.

Is someone targeting HN with OpenClaw? I wish they at least used a high-thinking model, but it seems like they are using the cheap API.

maxalbarello
6 hours ago
Would love to share some projects I've been working on but I can't because of this... any tips?
dddddaviddddd
21 hours ago
Long-term, I think AI bots will destroy text-based online communities like this one. I'll be sad to see it disappear.
adrianwaj
16 hours ago
I'd like to see comments and webmentions integrated into RSS readers, myself.

That way filtering can be done on the client side, and users aren't so dependent on the community admin to do the filtering. I'm not sure about the final architecture. Forums are still highly centralized.

Cryptopanic.com is an interesting site with a baseline look and feel and integrated comments, so something like that but running locally. Then an easy "mark as bot" button for training.
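The client-side filtering idea above could be sketched roughly like this. Everything here is hypothetical (the `Comment` and `BotFilter` names, the in-memory flag set); a real reader would persist the flags and might feed them into a trained classifier rather than just hiding flagged authors.

```python
# Minimal sketch of client-side comment filtering with a local
# "mark as bot" list. All names here are illustrative, not from
# any real RSS reader or webmention client.
from dataclasses import dataclass


@dataclass
class Comment:
    author: str
    text: str


class BotFilter:
    def __init__(self) -> None:
        # Authors the user has flagged via a "mark as bot" button.
        # A real client would persist this set between sessions.
        self.flagged: set[str] = set()

    def mark_as_bot(self, author: str) -> None:
        self.flagged.add(author)

    def visible(self, comments: list[Comment]) -> list[Comment]:
        # Hide anything from flagged authors. The flags could also
        # serve as training labels for a local classifier later.
        return [c for c in comments if c.author not in self.flagged]
```

The point of keeping this on the client is that each user curates their own blocklist (and training data) instead of waiting on a moderator.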

koolala
21 hours ago
If they become smart and insightful and don't lie about being human, it wouldn't be the worst thing. I'd like having AI friends like Data on Star Trek. But the opposite is the worst thing...
koolala
21 hours ago
https://news.ycombinator.com/user?id=anesxvito

The part that bugs me most is that they fill out fake "About Me" sections on their profiles.

cinntaile
20 hours ago
That bot needs more practice, though. It didn't even understand what it was replying to.
nazbasho
20 hours ago
ah, AI agents have buried every community.
rvz
21 hours ago
Assume anyone with an account created on or after 30 November 2022 is an AI agent.

There is no such thing as due process for AI agents. They are guilty until proven otherwise.

daemonologist
14 hours ago
I would propose July 2024 as the cutoff; early on, it was unusual to just set an LLM loose to run amok on a forum. I'm sure state actors and some corporations were experimenting with it (e.g., Ultralytics on their own GitHub), but it was usually either very obvious or very subtle, and the volume of the noise has only picked up recently.

Date picked based on this Trends page: https://trends.google.com/explore?q=agentic&date=all&geo=Wor...

Of course I'm biased, having an account created after November 2022.

what
19 hours ago
I guess you consider the Redditors that migrated here during that time frame due to the “api fiasco” to be bots.
drsalt
20 hours ago
define human
-1
18 hours ago
what is the point of this? what do they get out of having an AI post/write a comment? I don't understand it
harambae
12 hours ago
I assume with enough accounts that look legitimate, they can shape overall "consensus" opinion on something, which would be valuable for all sorts of reasons. Some of those reasons are obvious (promoting a particular product or service), but others are more subtle ("manufacturing consent" for, say, a war in the Middle East on behalf of some group).

We all like to think we're independent thinkers, but when seemingly everyone holds a certain opinion... it would still, at least subconsciously, sway the average person.

hash07e
17 hours ago
"First time"?