I feel like the moderated subforum is a fundamentally broken system for dealing with content. I much prefer the Federated / X / Instagram approach where I can deal with users and have the tools needed to curate my own content, instead of relying on some ideologically captured no-name account that chooses what I can or cannot see based on whims.
It makes a great propaganda machine though, given humans have a tendency to measure their own opinions against social cues.
And yes, ditch them. Even well over a decade ago, Wikipedia of all places already employed IP address matching to link sockpuppet accounts. You must be extremely careful never to use any device that was associated with your old accounts on the same network as the devices associated with your new account. And that includes devices only seen by association.
In this setup having users elect the moderator leads to cases where small groups create their special interest group and then some trolls challenge the moderator.
There may be some oversight on the large subforums, but not all.
The vast majority of sub forums however are more targeted and smaller to begin with.
One need only remember how easy it was to take over IRC channels with a few hundred bots to see the endgame of this rationale… it cannot be patched out, it’s inherent to the internet.
Whatever would make a vote valid can (and will) be gamed.
I am a big proponent of (direct) democracy in general.
You'd have to weight votes by some kind of participation metric to solve the problem of there being very little authentication of the voters.
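A toy sketch of what participation-weighted voting might look like; the weighting function, the inputs, and the cap are entirely made up for illustration:

```python
import math

def vote_weight(days_active: int, accepted_posts: int) -> float:
    """Toy participation weight: grows with tenure and accepted
    contributions, but saturates so veterans can't dominate."""
    tenure = math.log1p(days_active)       # diminishing returns on account age
    activity = math.log1p(accepted_posts)  # diminishing returns on volume
    return min(tenure * activity, 10.0)    # hard cap on any one voter

def tally(voters: list[tuple[int, int]]) -> float:
    """Sum the weights for a list of (days_active, accepted_posts) voters."""
    return sum(vote_weight(d, p) for d, p in voters)
```

A fresh throwaway account contributes zero weight, so a bot farm of new accounts tallies to nothing, while a handful of long-term participants still carry a vote.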
Are you sure? My understanding is that accounts were only allowed to create two communities.
That limit wouldn't stop you creating more communities with more accounts anyway.
Every site that is driven by user posting seems to be headed towards being overrun by AI bots chatting with each other, either for sake of promoting something or farming karma.
And there’s really not much point in publishing good content anymore, since AI is just going to slurp it up and regurgitate it without driving you any traffic.
Though it’ll be interesting to see what happens to ChatGPT and the like once the amount of quality content for them to consume slows to a trickle. Will people still use ChatGPT to get product recommendations without Reddit posts and Wirecutter providing good content for those recommendations?
I know this is going to sound horrible, but: how about asking for money to contribute, period? Maybe have a free tier of a couple comments, etc. But if you want to build a troll factory, sure... show us the cash?
This happens now on OnlyFans too. Content creators hire agencies which, in the best case, outsource chatting with "customers" to armies of cheap labour in Asia, and in the worst case use bots.
The dead internet theory [1] is probably not just a theory anymore. HN recently made a policy to not allow AI posting and posters, but do you honestly think that's going to work? I would place a bet that a top HN poster within the next year is outed as using AI for posting on their behalf.
"Creator", on the other hand, is beautiful. It means you don't have to pick a lane. Anything can be creative. Documentary filmmaking, stop motion, dance, costume work, historical reenactment, indie animation, economics essays, game dev...
The problem is we don't have a nice word that holistically captures the output of creators. They're not all making films or illustrations. So what do you call it? "Art" is awkward.
"Content" works, but it sounds like slop. We need a better alternative word that elevates creative output.
Perhaps not the worst thing in the world?
Verifiable credentials; services can get persistent pseudonymous identifiers that are linked to a real-world identity. Ban them once and they stay banned. It doesn’t matter if a person lets a bot post inauthentic content using their identity if, when they are caught, that person cannot simply register a new account. This solves a bunch of problems – online abuse, spam, bots, etc. – without telling websites who you are or governments what you do.
Even so, I implemented this and I wrote about it here: https://blog.picheta.me/post/the-future-of-social-media-is-h...
Imagine a system where there's a vending machine outside City Hall: you spend $X on a charity of your choice, and you get a one-time, anonymous token. You can "spend" it with a forum to indicate "this is probably a person, or close enough to it."
Misuse of the system could be curbed by making it so that the status of a token cannot be tested non-destructively.
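A minimal sketch of such a token service, assuming a trusted issuer that stores only hashes of unspent tokens; the class and method names are hypothetical:

```python
import hashlib
import secrets

class TokenIssuer:
    """Hypothetical vending-machine issuer: it stores only hashes of
    unspent tokens, so it cannot link a token back to a purchase."""

    def __init__(self):
        self._unspent = set()

    def issue(self) -> str:
        token = secrets.token_urlsafe(32)
        self._unspent.add(hashlib.sha256(token.encode()).hexdigest())
        return token  # handed to the anonymous buyer

    def redeem(self, token: str) -> bool:
        """The ONLY status check is destructive: a valid token is
        consumed the moment it is tested, so probing burns it."""
        digest = hashlib.sha256(token.encode()).hexdigest()
        if digest in self._unspent:
            self._unspent.remove(digest)
            return True
        return False
```

Because `redeem` is the only way to test a token and it consumes it, a bad actor can't non-destructively sort a pile of tokens into valid and invalid ones, which is exactly the curb the comment above describes.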
Sending an unsolicited email to a random person X requires you to pay a small toll (something like 50p).
Subsequent emails can then be sent for free - however person X can “revoke” your access any time necessitating a further toll payment.
You would of course be able to pre-authorise friends/family/transactional emails from various services that you’ve signed up for.
This would nuke spam economics and be minimally disruptive for other use cases of email IMO…
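A rough sketch of the toll logic described above; the 50p figure comes from the comment, everything else (names, API shape) is invented for illustration:

```python
TOLL_PENCE = 50  # cost of first contact, per the comment above

class TollInbox:
    """Hypothetical per-recipient toll ledger: first contact costs a
    toll, later mail is free until the recipient revokes access."""

    def __init__(self):
        self.authorised = set()  # pre-approved friends/family/services

    def deliver(self, sender: str, toll_paid: int = 0) -> bool:
        if sender in self.authorised:
            return True                    # previously paid or pre-approved
        if toll_paid >= TOLL_PENCE:
            self.authorised.add(sender)    # the toll buys ongoing access
            return True
        return False                       # bounced: unsolicited and unpaid

    def revoke(self, sender: str) -> None:
        self.authorised.discard(sender)    # next email needs a fresh toll
```

The economics argument is that 50p per new recipient is negligible for a person but ruinous at spam volumes of millions of addresses.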
These are one of the main culprits of unwanted emails... and a toll system would make them all the more valuable for the even worse actors to take advantage of.
You can already see it happening now - at least the bots that write like vanilla Claude/ChatGPT. Presumably there is a much larger hidden cohort of bots that are instructed to talk more naturally and thus are better adept at flying under the radar…
Which would be totally fine with me TBH.
Rather amusingly, invite-only torrent sites might be the only semi-public authentically human hangouts left on the internet!
I got encouraged by another HN poster a few days ago, let me know if you have any suggestions.
I’m always open to criticism.
This means that only sites which verify identity will have any value in the future. And by verified, that means against government ID and verified as real.
No amount of sign up fee works as an alternative.
Note that a site can verify identity, prevent sock puppets, ban bad actors and prevent re-registration, all while keeping that ID private.
You still get a handle and publicly facing nick if you want it.
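One way a site could plausibly do this is to persist only a keyed hash of the ID number, so sock puppets and re-registration after a ban are detectable while the ID itself is never stored. This is a sketch under that assumption, not a description of any real verification product; all names are hypothetical:

```python
import hashlib
import hmac
import secrets

class IdentityRegistry:
    """Sketch: the site keeps only an HMAC of the government ID number,
    so it can detect duplicates and ban evasion without holding the ID."""

    def __init__(self):
        self._key = secrets.token_bytes(32)  # server-side secret
        self._seen = set()                   # digests of registered IDs
        self._banned = set()                 # digests of banned IDs

    def _digest(self, id_number: str) -> str:
        return hmac.new(self._key, id_number.encode(), hashlib.sha256).hexdigest()

    def register(self, id_number: str) -> bool:
        d = self._digest(id_number)
        if d in self._seen or d in self._banned:
            return False  # sock puppet or ban evasion
        self._seen.add(d)
        return True

    def ban(self, id_number: str) -> None:
        self._banned.add(self._digest(id_number))
```

A leaked database of digests reveals nothing without the server key, which addresses part of the PII concern, though in practice the ID still has to transit through a verifier at signup.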
The company which handles this correctly will have a big B after it. Digg actually has a chance at this.
It has no users, so the outrage won't exist in the same capacity. Existing platforms will be pummeled in the market if they try to convert to this type of site, as their DAU will likely drop a thousandfold, just due to the eliminated bots.
But Digg could relaunch this way. And as exhibited, this is now the only way.
The age of the anonymous internet is over, it's done. People not realizing this are living in the past.
Note, I don't like this, but acknowledging reality is vital. Issues with leaked databases, users, hacking of Pii are all technical and legislative issues, and not relevant to whether or not this happens.
Because it will happen, and is happening.
It should be noted that falsifying ID is a crime. Fake ID coupled with computer fraud laws will eventually result in hefty jail time. This is sensible, if people want a world where ecommerce and discourse are online... and the general public does.
And has exhibited a complete lack of care about privacy regardless.
The creative loop moves inside the agentic chat room, where we do learning, work, art, research, leisure, planning, and other activities. Already OpenAI is close to 1B users and puts multiple trillions of tokens per day into our heads, while we put our own tokens into their logs. An experience flywheel, or extended-cognition wheel, of planetary size. LLMs can reflect on and detect which of their responses compound better in downstream activities and derive RLHF/RLVR signals from all our interactions. One good thing is that a chat room is less about posing than a forum, but LLMs have taken to sycophancy, so they are not immune - just easier to deal with than forums. And you can more easily find another LLM than a replacement specialty forum.
- You know who your online invitees are, but not your invitees-of-invitees-of-…
- You can create an account, get it invited, then create an alt account and invite it. Now the alt account is still linked to you, but others don’t know whether it’s your friend or yourself. (Importantly, you can’t evade bans with alts; if your invited users keep getting banned, you’ll be prevented from inviting more if not banned yourself)
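A toy model of the ban-propagation rule described above: every account remembers its inviter, and an inviter who accumulates too many banned invitees loses the right to invite. The threshold of three is arbitrary:

```python
class InviteTree:
    """Toy invite graph: each account records who invited it; inviters
    whose invitees keep getting banned lose invite privileges."""

    MAX_BANNED_INVITEES = 3  # arbitrary threshold for illustration

    def __init__(self):
        self.inviter = {}          # account -> who invited it
        self.banned_invitees = {}  # account -> count of its banned invitees

    def can_invite(self, account: str) -> bool:
        return self.banned_invitees.get(account, 0) < self.MAX_BANNED_INVITEES

    def invite(self, inviter: str, new_account: str) -> bool:
        if not self.can_invite(inviter):
            return False
        self.inviter[new_account] = inviter
        return True

    def ban(self, account: str) -> None:
        parent = self.inviter.get(account)
        if parent is not None:
            self.banned_invitees[parent] = self.banned_invitees.get(parent, 0) + 1
```

Note the privacy property from the comment: the graph only stores parent edges, so the site knows your direct invitees but learns nothing about whether an invitee is a friend or an alt.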
You just published good content knowing AI will slurp it up and not give you any traffic in return. I'm now replying to you with more content with the same expectations about AI and traffic. Why care about AI or traffic or recognition? Isn't the content the thing that matters?
It's like answering technical questions in an anonymous/pseudonymous chat or forum, which I'm sure you've done, too. We do it to help others. If an AI can take my answer and spread it around without paying me or mentioning one of my random usernames I change every month or so, I would be happy. And if the AI gives me credit like "coffeecup543 originally posted that on IRC channel X 5 years ago", I couldn't care less. It would be noise to the reader. Even if the AI uses my real name, so what?
The people who cared about traffic and money from their posts rarely made good content, anyway. Listicles and affiliate marketing BS and SEO optimizations and making a video that could be 1 minute into 10 minutes, or text that could've been 5 articles into a long book - all existed from before AI. With AI I actually get less of this crap - either skip it or condense it.
The bots are not really that bad, they're (still) pretty easy to spot and not engage with. I'm more perplexed about the negativity filled comments sections, and I'm pretty sure most posters are real grass-fed certified humans.
I don't get why negative posts get so upvoted, get so popular on the front page, and people still debate with outdated arguments in them. People come in and fight other demons, make straw-man arguments, and in general promote negative stuff like there's no tomorrow. I think you can get so much more signal from positive examples, from "hey I did a thing" type posts, and so on. Even overhyped stuff like the claw-mania can still be useful. Yet the "I did a thing" posts get so overwhelmed by negativity, nitpicking, and "haha not perfect means DOA" type messages. That makes me want to participate less...
In the most simple sense - Yes, it is the content that matters.
In the more practical sense - cognitive and emotional resources are limited and our brains are not content agnostic.
We have different behaviors, expectations and capacities for talking to machines and talking to humans.
For example, if I am engaging with a human I can expect to potentially change their minds.
For a machine? Why bother even responding. It’s of no utility to me to respond.
Furthermore, all human communication comes with a human emotional context. There are vast amounts of information implied through tone, through what we choose not to say. Sometimes people say things in one emotional state that is not what they would say on another occasion.
To move the conversation forward, addressing the emotional payload behind the words matters more than the words themselves.
There are myriad reasons why humans are practically poorer for these tools.
I honestly believe it might not even be such a bad thing. People were arguably better without social networks and media, and it's perhaps better to let the cancerous thing just die and keep the internet just as a utility powering boring things like banking and academia.
The internet archive is my safe haven these days, i can go back and remember the old internet.
There was a lot in the new Digg that I was concerned about, or at least not optimistic about, but come on - are we even going to try anymore?
Two months, according to The Verge.
https://www.theverge.com/tech/894803/digg-beta-shutdown-layo...
This is particularly embarrassing since from what I recall they were all in on AI with the new website, so to shut it down so fast because of it…
Now it's gone, again. Without a heads-up or a way to get a backup out of it, it seems. Can't say I am a fan of that.
They could at least put it in read-only mode for a short time and allow downloading of extant community content prior to a scheduled "reset day".
This smacks of flailing leadership and zero respect for their target user demographic.
The only sustained business I'm aware of is Hodinkee.
From what I can tell Watchville was abandoned a few years ago.
Their plan is to make the internet what it was 22 years ago.
I'm sure it's impossible, but what if it's not?
Example: https://0x0.st/8RmU.png
I use mander.xyz, it's science focused, but they also have a policy of only de-federating instances that host CSAM.
Their /instances page also only shows a single blocked instance, whereas something like programming.dev shows lots of questionable instances blocked.
If you're telling me it's _worse_ than reddit in this regard, I can only imagine how terrible it is.
Next time, try doing it in a way that you control.
My main point wasn't that, though. It's simply a bad and low-effort way to handle the situation, and like one of the other replies points out, there are better options. They could have just as well disabled posting and maybe even viewing of submissions and communities for the time being. Just shutting it all down immediately without notice leaves a bad taste in my mouth, and I will not be among the people returning for their next relaunch. I am sure others feel the same way, and I don't think it is a wise decision to needlessly put off your early adopters if you're hoping for them to come back "next time".
I can see why the team got overwhelmed. I wouldn't want to have to deal with that.
Digg.com Is Back - https://news.ycombinator.com/item?id=46671181 - Jan 2026 (10 comments)
Digg.com relaunch public beta is live - https://news.ycombinator.com/item?id=46623390 - Jan 2026 (18 comments)
Digg.com (Relaunch) - https://news.ycombinator.com/item?id=46524806 - Jan 2026 (3 comments)
Digg.com is back - https://news.ycombinator.com/item?id=44963430 - Aug 2025 (204 comments)
Digg is trying to come back from the dead with a reboot - https://news.ycombinator.com/item?id=43812384 - April 2025 (0 comments)
(context so people don't have to click links)
Damn, that didn't take long at all...
There are subreddits within Reddit such as https://www.reddit.com/r/neutralnews/ that have strict rules around sourcing, etc. However, I think that’s not what most users want, and may not be quite what you’re looking for either, apologies.
In the same way people want to be fit.
There are three horsemen of Internet forums, and one of them is topics with a low barrier to entry.
At that point anyone can speak up, and their opinion takes up as much screen real estate and reading time (often less reading time) as a truly informed take.
By putting effort barriers in place, it forces a fitness test that most users (and bots) fail.
Another subreddit which has strong rules is r/badeconomics. I didn’t know about neutralnews, so thank you for giving me another example to add to the list.
Topical forums tend to have a much higher SNR. My favorite forum of all time, johnbridge, had none of those issues. Sadly it died this year all the same, but many others still exist. When you have a forum dedicated to something that requires a minimum barrier to entry, the more useless folks get shunned away pretty early and easily.
- Users don't have to pay to post links/stories
- Users have to pay to comment on links/stories
- Users have to pay to "upvote" comments. Downvotes don't exist
- Each link "lives" a certain amount of time before it is locked.
- After lock time, users who posted the link get "paid" a % of the collected $ from comments/upvotes. Comments that are upvoted also earn $ proportionally to the upvotes.
Hashcash was conceived to solve automated spam/email. Participating in a discussion must cost something, that's the only way bots and spam will get partially stopped. Or, if they start to optimize to get "the most votes", then so be it, their content will increase in quality.
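For reference, a simplified hashcash-style proof of work. Real hashcash uses SHA-1 and a structured stamp format with a date field; this sketch just demands leading zero bits from SHA-256, which keeps the idea intact: minting a stamp is expensive, verifying it is one hash.

```python
import hashlib
from itertools import count

DIFFICULTY = 12  # required leading zero bits; tune to set the cost per post

def mint_stamp(resource: str) -> str:
    """Search for a nonce so that sha256(resource:nonce) has
    DIFFICULTY leading zero bits (~2**DIFFICULTY hashes on average)."""
    for nonce in count():
        stamp = f"{resource}:{nonce}"
        digest = int.from_bytes(hashlib.sha256(stamp.encode()).digest(), "big")
        if digest >> (256 - DIFFICULTY) == 0:
            return stamp

def verify_stamp(stamp: str, resource: str) -> bool:
    """Cheap check: one hash, plus confirming the stamp is bound
    to this resource (e.g. a post ID) so it can't be reused elsewhere."""
    digest = int.from_bytes(hashlib.sha256(stamp.encode()).digest(), "big")
    return stamp.startswith(resource + ":") and digest >> (256 - DIFFICULTY) == 0
```

At 12 bits a stamp costs a few thousand hashes, which is imperceptible for one comment but adds up across the millions of posts a spam operation needs.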
If this were to exist today, I know I would be incredibly critical of it.
I kind of expected this. The way some of these people work, if the site isn't an instant unicorn, it's trash. But if the goal is a good community, that is something that takes time to build and should grow slowly. The incentives are all backward.
It was fine: people talked about work, personal stuff, travel. Then one person posted about their disappointment that their state was limiting various services or rights for gay people. For them this meant their rights were in question, and they were understandably upset.
Immediately some folks cried politics and that they shouldn’t post about that sort of thing.
To the user posting it, it was about their life…
I don’t think “no politics” rules really make much sense. For some people it’s more than politics, and IMO the fact that a topic is touched by politicians or government shouldn’t make it disallowed.
You thinking that astroturfing only happens for US politics is dangerously naive.
To be fair, I don't know Kevin Rose personally, so maybe he knows more than the industry, but I highly doubt it.
Reddit has the same problem. They are fighting it more or less successfully. I would look more in that direction.
I know they claim to care about the bot problem, but they appear at absolute best incredibly complacent about it, if not complicit. All those OnlyFans spammers, AI spam bots, etc. are engagement. They are ruining the platform for people, but engagement figures don’t distinguish between fake engagement and real people. The outcome of their current behaviour is for engagement to steadily rise while the value to real people steadily falls. It’s like they want to be the poster child for Dead Internet Theory.
https://www.businessinsider.com/reddit-ceo-platform-most-hum...
I guess that in an ocean of upvote-based platforms, an island of hand-picked content was a welcome change -- at least for me.
The move (back) to a reddit-like site never made sense to me. Hopefully what comes next has real value to the users.
I'm a bit surprised with Alexis' involvement they didn't anticipate the bot problem. Alexis left reddit several years ago but I'm sure he's still in touch with the folks who run the place. It would've been worth it to talk to them about the threats they currently face and how they deal with them.
Am I completely off base or did they use AI to write the post complaining about AI?
Digg isn't just here again. It's gone again.
The LLM style is like nails down a blackboard, are people blind to it or do they just not even read the stuff they're posting?
I was an avid Slashdot user way back in the day, but the site was basically the same throughout the day, and I wanted faster updates. Digg did this perfectly for a time, but eventually I migrated entirely to Reddit (even before whatever that drama was that caused a big exodus from Digg).
I think Reddit right now is the sweet spot: up to date information, longer-term articles to read, and easy to catch up on things I missed. I was recently pressured to sign up for X (or Twitter or whatever), and I had to turn off all of the notifications since I was constantly spammed with "BREAKING: X RESPONDS TO Y ABOUT Z!!!!"
Right now having Reddit for scrolling and Hackernews for articles+discussion feels like it works for me.
There are decent small communities I'm a part of but the trash feels like it is encroaching.
And the notifications you describe are exactly reddit's notifications? "your comment received 10/20/50/100 upvotes!" "x responds to y about z" "News is trending"
We really need some way to "verify as human" in the coming years.
I don't believe there is any practical way to do it.
Sure, there are ways to verify a human linked to a specific account exists in a one-off fashion, but for individual interactions you'll never know that it isn't an LLM reading and posting if they put even a small amount of effort to make it seem humanish.
I suppose bots could find forums that use the most popular software and still make accounts and spam, but it would be much more obvious and less fruitful for someone to spam deck builders in Vancouver (something I saw often on Digg) on a forum that is focused on aquariums owners in the midwest.
i really enjoyed the new digg
I don’t understand what kind of shenanigans transpired. But it seems there’s more to it than “bots”.
If it truly is bots, maybe a private invite only social network is the way to go.
Thanks for the fun this past year Digg.
Ironic, they use AI in their shutdown post that blames AI.
> Ironic, they use AI in their shutdown post that blames AI.
This… seems like regular prose to me. What makes you say so confidently it was written by AI?
> We know how frustrating this is, and we hope you'll give us another look once we have something to show, we’ll save your usernames!
I think it's partly human. But ex:
> Network effects aren't just a moat, they're a wall.
isn't a natural sentence.
> This isn't just a Digg problem. It's an internet problem. But it hit us harder because trust is the product.
The statement this is making is presumably the crux of the problem (Digg cannot survive without trust!) but it's worded so poorly that it's hard to imagine someone sat down and figured these three sentences were the best way to make the point.
Could it be generated? Sure. But there aren't the obvious tells you act like there are.
"We underestimated the gravitational pull of existing platforms. Network effects aren't just a moat, they're a wall."
It's a mixed metaphor which doesn't make any sense. There are really very few ways in which this can be considered good writing - I guess the grammar is ok even if it is nonsense.
So let's break it down - underestimated the gravitational effects - ok, this is nice, like where it's going talking about these big competitors sucking in users, but then we have the metaphor extended to breaking point:
Network effects are a moat, but not just a moat, they're a wall (which is really not anything like a moat). So which of these 3 things are they, and why are we mixing the metaphors of gravity (pulling in customers), moats (competitive moat) and walls (walled gardens).
It's just all a bit nonsensical - the kind of fuzzy prose that seems superficially impressive without actually saying anything meaningful, which is exactly what LLMs excel at. Go try generating an article from just the headings in this article, and see how similarly it reads.
Compare to the canonical example from Cyrano de Bergerac: ''Tis a rock! ... a peak! ... a cape! -- A cape, forsooth! 'Tis a peninsular!'
Also, weren't "moats" commonly paired with a wall in real life? As in a moat around a castle wall?
In business metaphors no they are used for different things and also when you create a metaphor you should stick with it, that’s what makes this jarring and weird.
I don't care so much about Digg, but the endless "haha, I caught you!" comments annoy me more than the rare actual AI-written content they label.
(Where do you think AI picked up its writing habits from?)
I think the HN title needs adjusting
No you can't visit.
Dead internet theory confirmed, Digg the latest victim
The only website which became totally useless for me after the general availability of LLMs is OkCupid. It's indeed dead. The rest are fine.
What am I doing differently compared to everyone else?
I'm regularly using: telegram, whatsapp, wechat, hackernews, lobsters, reddit, opennet.ru, vk.com, pornhub, youtube, odysee, libera.chat, arxiv, gmail, github, gitlab, sourcehut, codeberg, thepiratebay, rutracker, Anna's archive, xda-developers.
facebook and twitter became broken for me, but not because of bots, rather because of the "smart feed" ("the algorithm"), which is hiding all posts of my friends and promotes incendiary garbage.
In other words, I am seeing enshittification full-scale, but not the bots.
YouTube comment sections are botted.
100% that entire page was written by an LLM. So fucking obvious and I’m so tired of reading the same awful writing style with all these corporate spiel rants. If you don’t care enough to write something yourself, just don’t even bother.
Hmm...
> We underestimated the gravitational pull of existing platforms. Network effects aren't just a moat, they're a wall.
What does this even mean? How many metaphors can it mix up in one paragraph? Can't they write a blog post the old fashioned way, with feeling? Imagine reading a corporate blog post about being laid off which the founder couldn't even be bothered to write.
Amazing how close to corporate newspeak chatgpt can get (prompt was the headings of this blog post), it has the same sort of blank say-nothing feeling of this blog post: https://chatgpt.com/s/t_69b4890e54ac819193f221351ea900a7
Step 1: Copy Reddit
Step 2: ?
Step 3: Profit!
If they relaunch, I hope they develop something integrated with the fediverse. I believe the time to build walled gardens is over; plugging into the fediverse might give them a running start to build something together with the wider fediverse community, maybe something easier to use for non-techies and well moderated.
We will see I guess…
What's an actual viable solution to this kind of thing?
CAPTCHAs aren't it. Maybe micro-fees to actually post things would discourage bot posting? I really don't know.
Seems like it's just dead internet all over the place these days.
This. So much This.
And I will die on the hill that Reddit only survived/became "successful" because of the legendary Digg slip-up and exodus. Alexis Ohanian still doesn't seem to have any clue that it was right-place-right-time, and Kevin Rose seems to have not learned much either. Can we stop giving either of them any more credibility? As with any social site, it's the user base/community that helps pull through darkness. And no one was really asking for this.
Let sleeping dogs lie.
I wasn't a digg user, but this was done to combat 'voting rings' (bots), and the reddit migration was memed partially because it was/is far more open to manipulation. So at least their principles have been somewhat consistent.