EU to crack down on TikTok, Instagram's 'addictive design' targeting kids
437 points
by thm
8 hours ago
| 41 comments
| cnbc.com
conception
7 hours ago
[-]
This is pretty easy to solve. If you present data by algorithm, you are no longer an impartial common carrier and are liable for the content you present. If the user decides what they see, you aren't, à la social media 1.0.
reply
Aurornis
6 hours ago
[-]
> If you present data by algorithm, you are no longer an impartial common carrier and are liable for the content you present

Hacker News is a site that presents data by algorithm. Under your definition, Hacker News goes away, too.

A more accurate framing would be that they’re going after personalized recommendation algorithms. It’s not obvious that offering a recommendation algorithm would mean that the site is no longer an impartial common carrier.

reply
another-dave
6 hours ago
[-]
Goes away, or is liable for the content promoted to the frontpage under the OP's take?

But I'd agree that it's personalisation rather than just curation that's the issue.

I think even requiring sites to have a "bring your own algo" version (and where ads are targeted to the algorithm, rather than the person) would cure a lot of ills.

As is, even with something like Spotify where you _are_ paying, there's no easy way to "reset" your profile to neutral recommendations

reply
Aurornis
6 hours ago
[-]
> Goes away, or is liable for the content promoted to the frontpage under the OP's take?

Same thing. There is no Hacker News if Y Combinator becomes liable for user submitted content.

It’s an obvious backdoor play to make sites go away. If a site becomes liable for content posted, you cannot allow users to post content without having the site review and take responsibility for every comment and every post.

The people proposing it haven’t considered how damaging that would be for the ability of individuals to share ideas and their content. When every site with “an algorithm” is liable for content posted, nobody is going to allow you to post something. It’s back to only reading content produced and curated by companies for us. Total own-goal for the individual internet user.

reply
SoftTalker
3 hours ago
[-]
I think you could finesse it by saying that on HN, the users submit the content and the users also determine (by voting) what is popular. Ycombinator doesn't promote or bury any particular post with their own algorithms; they don't exercise any editorial review or control. (I don't think that's exactly true today, but it could be).

But to the larger point, I would actually agree that sites should "review and take responsibility for every comment and every post." They are the ones amplifying and distributing this content; why should they have zero responsibility for it?

Yes that would dramatically change what gets published online, but I think that would be a good thing.

reply
pibaker
2 hours ago
[-]
And how do you think any other website decides what to recommend you, if not other users' actions? Remember the Netflix prize? The data set they gave you is how other people rated movies. You can absolutely build a recommendation system without manual input from the operator.

And HN absolutely does promote submissions at the moderators' discretion. The moderators sometimes give old but overlooked submissions a second chance, they also turn the flamewar detector on some stories that they think deserve more attention which effectively promotes them against users' will.
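
For what it's worth, here's a toy sketch of the kind of item-item recommender the Netflix Prize data enables, built purely from other users' ratings with no manual input from the operator. It's illustrative only (made-up data, made-up function names), not anything Netflix or HN actually runs:

    from math import sqrt

    # Toy item-item recommender in the spirit of the Netflix Prize data:
    # the only input is how other users rated things, {user: {item: rating}}.
    ratings = {
        "alice": {"A": 5, "B": 4, "C": 1},
        "bob":   {"A": 4, "B": 5},
        "carol": {"B": 1, "C": 5, "D": 4},
    }

    def item_vector(item):
        # Ratings each user gave this item (0 if unrated).
        return [ratings[u].get(item, 0) for u in ratings]

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    def recommend(user, top_n=3):
        seen = ratings[user]
        items = {i for r in ratings.values() for i in r}
        scores = {}
        for candidate in items - seen.keys():
            # Score a candidate by its similarity to items the user rated highly.
            scores[candidate] = sum(
                cosine(item_vector(candidate), item_vector(i)) * r
                for i, r in seen.items()
            )
        return sorted(scores, key=scores.get, reverse=True)[:top_n]

    print(recommend("alice"))  # ['D'] with this toy data

The point is that "other users' actions" is the only signal; the operator never hand-picks anything.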

reply
AlecSchueler
1 hour ago
[-]
> users also determine (by voting) what is popular

The algorithm considers various other things, such as the ratio of votes to comments, the age of the post, etc.

Just compare how different the front page is to /active

> Ycombinator doesn't promote or bury any particular post with their own algorithms

Certain things do get put above the popular stuff if they're fresh enough and your account is deemed to be a taste setter.

> they don't exercise any editorial review or control.

They can decide things like overturning the flagging of a post or burying something even without the flag etc.
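
(For reference, the often-quoted public approximation of the base ranking, pieced together from old posts and reverse engineering, looks roughly like the sketch below. The real implementation isn't public and layers on the penalties and moderator tooling mentioned above.)

    def rank_score(points, age_hours, gravity=1.8, penalty=1.0):
        # Widely circulated approximation of HN front-page ranking:
        # votes push a story up, age pulls it down, and multiplicative
        # penalties (flamewar detector, keyword/domain penalties, etc.)
        # are applied on top. Not the actual source code.
        return ((points - 1) / pow(age_hours + 2, gravity)) * penalty

    print(rank_score(points=10, age_hours=1))   # ~1.25: fresh, few votes
    print(rank_score(points=40, age_hours=12))  # ~0.34: older, more votes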

reply
fc417fc802
1 hour ago
[-]
Importantly, all except one of those things are impartial to the user, and even that one is merely binning based on a single category. "Algorithm" here is a red herring, IMO; people are objecting to a couple of fairly specific things. One is personalization carried out by the other party; the other is designs that introduce partisanship or are detrimental to the end user (i.e. addiction and other dark patterns).
reply
charcircuit
2 hours ago
[-]
And on TikTok users vote what is popular by giving videos watch time. It is no different.
reply
fc417fc802
1 hour ago
[-]
Is TikTok really so straightforward? I don't believe your assertion is correct but I'm open to evidence.
reply
charcircuit
1 hour ago
[-]
The main difference is that HN uses time to segregate cohorts and TikTok uses interests to segregate cohorts. If enough people within these cohorts upvote / give watch time then the content is shown to more cohorts.
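
A toy model of that escalation, purely illustrative (the numbers and structure are invented, not TikTok's or HN's actual logic): an item is shown to successively larger cohorts and only keeps spreading while the engagement rate in the current cohort clears a threshold.

    def escalate(engagement_by_cohort, threshold=0.05):
        # engagement_by_cohort: (cohort_size, engaged_count) pairs, smallest
        # cohort first: time buckets for HN /new, interest buckets for TikTok.
        reached = 0
        for cohort_size, engaged in engagement_by_cohort:
            reached += cohort_size
            if engaged / cohort_size < threshold:
                break  # never promoted beyond this cohort
        return reached

    # 8/100 early viewers engage -> promoted; 20/1000 engage -> stops there.
    print(escalate([(100, 8), (1_000, 20), (100_000, 0)]))  # 1100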
reply
fc417fc802
1 hour ago
[-]
I understand the basic principle. Clearly that's one of the inputs. What I'm questioning is your implied assertion that there's nothing else to it.

I don't for a second believe that tiktok (or facebook or any of the others) employs a primitive algorithm that impartially orders results based on a simple and straightforward metric without consideration for their own interests.

reply
andrewjf
6 hours ago
[-]
I agree with what OOP said. But it’s not my intent to “shut sites down.” I have this view to try to increase diversity of media consumption and break people out of echo chambers. If your business model is so shit you have to exploit weaknesses in human brains to keep people viewing ads and can’t adapt, then that’s your problem.

If you have an algorithm whose sole purpose is to drive “engagement” with your own platform (by intentionally and purposely pushing clickbait, ragebait, and media that keeps reinforcing your clicks), you should no longer get Section 230 protections - you are no longer a neutral party. These algorithms exist to create echo chambers and keep you clicking so you can consume more ads.

I would love to hear other ways of solving the problems of social media.

reply
Aurornis
4 hours ago
[-]
> I have this view to try to increase diversity of media consumption and break people out of echo chambers.

Making sites liable for all user-posted content would do the reverse of this. Every platform that lets people submit content would have to stop doing that, because it’s an impossible liability to manage.

You’d have to host your own site. You wouldn’t be able to share anything about it on a social media site because it's user-generated content. No visitors unless you advertise it through paid contracts with companies that can review it and decide to accept the liability.

reply
ryandrake
2 hours ago
[-]
Newspaper "Letters to the Editor" manage to do this. Users "submit" things to the newspaper, the editor curates and decides what to keep and what not to, and then the newspaper publishes the user generated content. Just like social media: Users submit things to the site, TheAlgorithm curates and decides what to keep and what not to, and then the site publishes the user generated content.

If web sites and social media can't "scale" to do this, then maybe they should scale down. "Making sites liable for all user-posted content" would not kill social media, but would definitely scope it down to what can be effectively curated.

reply
throwaway902984
1 hour ago
[-]
I don't think there are enough dangs to effectively curate much of the internet, and how much would it be scaled back as a result? 95%? That's before settling on a definition of "effectively curate", I suppose.
reply
fc417fc802
58 minutes ago
[-]
"Effectively curate" here simply means "willing to take legal responsibility for" (although in practice I assume there would be an insurance policy involved because that's just how things are done).
reply
fc417fc802
54 minutes ago
[-]
I notice that parent describes "engagement" algorithms and you somehow jump to "all sites". So I think we'd see "engagement" algorithms disappear and very primitive approaches with prominent transparency measures in place would replace them. I expect we'd all be better off were that to happen.
reply
freejazz
2 hours ago
[-]
>Every platform that lets people submit content would have to stop doing that, because it’s an impossible liability to manage.

This is a huge assumption that is offered constantly, and always, without any evidence at all.

reply
throwaway902984
1 hour ago
[-]
"letters to the editor" curated by employees would become a part of their business model and regular contributions would go away? Why would that assumption be incorrect? I wouldn't run a website where a casual user having a moment could result in my imprisonment. I would only allow non-lbtq content that didn't mention race or immigration, as the chilling effect there is real. A DA would for sure come after me if my site became influential.
reply
weregiraffe
2 hours ago
[-]
> It’s back to only reading content produced and curated by companies for us

I didn’t know only companies can have websites.

reply
buu700
38 minutes ago
[-]
It's a matter of resources, not corporate status per se. For better or for worse, the current status quo largely democratizes content promotion. You and I can post these two comments here and put our ideas and names in front of a bunch of strangers for $0.

In a world where the risk-adjusted cost of allowing third-party comments on your platform shoots up, someone has to pay that cost. A personal blog hosted on your server might struggle to find any significant reach without a real advertising budget, because distributing speech/content that promotes your platform would no longer be ~free.

I don't necessarily believe that the major social media platforms would fully evaporate, but I'd expect some or all of these changes across the ecosystem:

* Massively scaled up LLM-based moderation/censorship.

* Replacement of direct user content posting with an LLM-based interface (to chat with an LLM about what you want it to write on your behalf).

* Payment-gated public posting, e.g. monthly or per-post fees to cover liability/insurance and/or LLM inference costs. Possibly higher fees for direct authorship vs LLM pair posting.

* Massive rise in adoption of decentralized architectures, either via current mainstream platforms if legally tolerated or via anonymous dark web platforms otherwise. Maybe Tor becomes as normalized as VPNs, or maybe the Western legal environment shifts hard against general-purpose computing.

I understand where this sentiment is coming from, but I think it's taking a lot of the current status quo for granted. What you guys are proposing isn't necessarily a targeted change that would simply make bad guys stop doing bad things. It's more likely a massive structural change that would dramatically alter the social and economic fabric of the internet as we know it, and not in a way that most of us would like.

reply
freejazz
2 hours ago
[-]
>Same thing. There is no Hacker News if Y Combinator becomes liable for user submitted content.

Why is this assumed to be true?

reply
NewsaHackO
2 hours ago
[-]
If Y Combinator has to officially approve every article submitted, then it becomes the publisher of a news site, not a social media site. Essentially, it would be a New York Times site with unpaid writers.
reply
freejazz
2 hours ago
[-]
And? The New York Times website exists, last I checked.
reply
NewsaHackO
2 hours ago
[-]
I guess I am not seeing your point. A site that is completely a blank page exists also.
reply
freejazz
2 hours ago
[-]
Well, the argument was that Hacker News would no longer exist. I asked why, and your response was that it would be like the NY Times; but the NY Times website does exist, so I don't understand what point you're trying to make.
reply
NewsaHackO
2 hours ago
[-]
Got it. If the page no longer fulfills the original purpose people went to it for, it ceases to be interesting. The fact that the page merely exists is meaningless, much like a blank website.
reply
buellerbueller
6 hours ago
[-]
>It’s an obvious backdoor play to make sites go away.

Oh no.

reply
tencentshill
6 hours ago
[-]
The algorithm is not personalized. It's the same for every user. No issue there.
reply
tolerance
5 hours ago
[-]
But still an algorithm. The difference is that we (at least some of us) place a greater trust in the integrity behind how information surfaces on HN. I think that some parts of it are open source, and the moderators are transparent enough about what isn't public + there is a mix of folk knowledge that explains how HN works under the hood.

Depersonalized algorithms or recommender systems aren't inherently better than personalized ones. HN is an exceptional example of the former but I think at scale people would come up with a different crop of complaints for them.

reply
tencentshill
5 hours ago
[-]
Yes it's still an algorithm. Cable TV programming is another example. Everyone sees the same content. The ads are changed at the local broadcaster level but are not tailored to the individual, and are not harmful in the ways the EU is regulating. If anything, everyone watching the same thing is good for social cohesion. Everyone discusses the latest TV episode the next day at the office.
reply
tolerance
5 hours ago
[-]
Right. Setting aside the fact that cable television doesn't appear to be the typical distribution method anymore, how do broadcasters select/schedule their programming?
reply
fc417fc802
48 minutes ago
[-]
What's your point? It seems like you're pedantically focusing on a single word without regard for the actual meaning of the broader statement. No one is proposing to regulate things done in the traditional manner of cable tv, nor other uniform and impartial approaches.
reply
AlecSchueler
1 hour ago
[-]
> It's the same for every user

It isn't. Users who vote and flag more often are more likely to have things from /new surfaced on their main page for example.

reply
jimbob45
2 hours ago
[-]
And also the algorithm here is title-blind. The content of the story bears no sway over its place in the rankings. I do not believe dang cherry-picks either except for the very rare sticky?
reply
AlecSchueler
1 hour ago
[-]
> the algorithm here is title-blind

Is that true? I thought I had seen it said that there are keyword penalties to discourage things like political posts, and that these can be turned off and on

reply
achenet
5 hours ago
[-]
The facebook/meta algo might be the same for all users, but it has different inputs for each user.

On HN, on the other hand, everyone has the same front page. If I like a post I can favorite it to 'bookmark' it, but HN won't modify my front page based on what I favorite, whereas Facebook will.

I think the GP's argument is, when it comes to social media, "one size fits all" might be less addictive than "custom made" :)

reply
tencentshill
3 hours ago
[-]
That's what I was saying. I was referring to HN, not Facebook.
reply
alkonaut
4 hours ago
[-]
> Hacker News is a site that presents data by algorithm

Does it though? I mean by "algorithm" in this context we mean "personalized algorithm meant to maximize engagement and retention".

Not e.g. "sort by upvotes and decay by time" or even "filter content based on coarse user location".

Does HN show me a different front page than everyone else based on which articles I have read or upvoted? That would make me feel worse about the site because I don't want a personalized HN feed I want to read what everyone else is reading (which is incidentally why I refuse to give up linear TV).

reply
Aurornis
4 hours ago
[-]
> Does it though? I mean by "algorithm" in this context we mean "personalized algorithm meant to maximize engagement and retention".

I addressed that in the second half of my comment already.

But yes, HN qualifies as a site that displays by algorithm. If you mean personalized recommendation algorithm then it’s important to call that out. The last thing we want is regulation so broad that it catches every site that ranks things.

reply
alkonaut
3 hours ago
[-]
No one _ever_ even considers "algorithms" in the CS sense here (such as "sorting"), and even bringing that notion up would be deliberately dumbing down the discussion (yet it keeps happening in this thread over-and-over-again because people are for some reason very "well ackshually sorting is an algorithm").

"Algorithm" in this context is very clear what it is. It is not what the word means in Computer Science or in general. Just from the context and without any clarification needed "algorithms" in social media means "addictive personalized feeds".

reply
ryandrake
2 hours ago
[-]
I think we need a different word, so that Computer Science grads stop getting wrapped around this axle. We're obviously not talking about Quicksort when we're talking about social media algorithms and other recommendation/discovery algorithms. Heck if I know what that word would be.
reply
addaon
19 minutes ago
[-]
We can call them crypto. Won't make any less sense than the current usages, and it's a really good indicator to stop listening.
reply
alkonaut
57 minutes ago
[-]
Yes absolutely. Sadly I think that ship has sailed. Now if you ask 100 people in the street what "algorithms" are, I bet a majority among those who answer anything at all will answer it's something related to evil social media corporations.
reply
AlecSchueler
1 hour ago
[-]
Personalised algorithms.
reply
xigoi
5 hours ago
[-]
> Under your definition, Hacker News goes away, too.

It doesn’t have to go away, just switch to chronological sorting.

reply
Analemma_
5 hours ago
[-]
Have you ever browsed by New and seen the firehose of shit which doesn’t make it to the front page? HN sorted by new is effectively useless and you might as well shut the site down at that point.

“Chronological only” might work for something like Twitter where you’re choosing to follow specific individuals to see their posts, it can’t work for curation sites like HN/Reddit.

reply
xigoi
5 hours ago
[-]
That could be solved by allowing users to filter by score or number of comments.
reply
throwaway902984
3 hours ago
[-]
Which would lead to everyone having their own personalized front page? Not controlled by dang so much but still.
reply
xigoi
2 hours ago
[-]
Not being controlled by the website owner is the point.
reply
throwaway902984
1 hour ago
[-]
Yeah, for sure, I see what you were saying. Changing that part might not achieve the desired effect, though, is what I was saying. It's context-dependent on the site, of course, but in a general sense I could see Meta et al. being largely unfazed by this.
reply
veeti
3 hours ago
[-]
Sorting is not an algorithm?
reply
xigoi
3 hours ago
[-]
Not what people are talking about when complaining about algorithmic feeds.
reply
kjkjadksj
1 hour ago
[-]
The Hacker News front page would go away, but not /new or any top ranking. It would be nice to have an HN without the second-chance queue, IMO.
reply
dangus
2 hours ago
[-]
The difference is you can’t prove that hacker news has a bunch of psychologists on staff who are dreaming up ways to make the website addictive.

If you take TikTok to court and go through discovery you’re going to find internal communications of people talking about ways to get people to stay on the app longer, ways to make the content more addictive, ways to maximize ad reach, etc.

Hacker News just tossed together a simple upvote/downvote system and called it a day.

Plus it has no endless scroll, no graphics at all, limits your comment frequency, has no push notifications, etc.

reply
jackdoe
6 hours ago
[-]
> Hacker News goes away, too.

so be it.

reply
vasco
6 hours ago
[-]
This is a strange thing to comment on HN. If you truly believed it why would you be here?
reply
buellerbueller
6 hours ago
[-]
The majority of terminally addicted people I have interacted with at length have both recognized the terminal nature of their addiction and been unable to do anything about it.

That's the nature of addiction.

reply
coffeefirst
4 hours ago
[-]
Honestly the damage done by TikTok et al is so severe that I’m okay with a little collateral damage. We will build new things.

But I also see no reason you can’t separate out forums with upvoting from the personalized engagement optimized feed. They are fundamentally different designs. (In other words, Subreddits are safe, the Reddit homepage is regulated unless it changes.)

reply
schnitzelstoat
6 hours ago
[-]
So the user opens the app - what is the first video you show them? How does 'the user decide' from the millions upon millions of videos there are?

If the user can search like in Youtube then how do you rank the results? That's also an algorithm.

It isn't pretty easy to solve at all.

reply
alkonaut
6 hours ago
[-]
In the case of Instagram: you show the videos from the people you follow, and beyond that, no more short videos at all. Possibly a search box.

If you search on youtube then it can rank any way it wants, just not use e.g. anything from the viewing history. No "related videos" column. That's what YouTube used to be. But YouTube (unlike TikTok) worked well before it had rabbit holes.

For TikTok the situation is worse. Their whole app just doesn't exist unless you have the custom feeds. This would make YouTube be 2010 youtube, Instagram be 2010 Instagram (great!) but it would effectively be a ban of TikTok's whole functionality (again, great!).

reply
Aurornis
6 hours ago
[-]
I think it would be great if all of these apps had an option to function like you propose: Your feed is a simple view of people you’ve chosen to follow. The end.

Then all of the people who have trouble with self-control on infinite feeds can enable this mode, and everyone who wants the recommendation algorithm can leave it on.

This is the optimal outcome that actually serves everyone’s personal goals for using these platforms. If we get into a conversation where some are demanding we don’t allow anyone to use a recommendation algorithm because they feel the need to control what other people see, that’s a different conversation. That conversation usually reveals other motives, like when people defend the algorithm sites they view (Hacker News, Reddit, whatever) but target sites they don’t, like TikTok.

reply
gibspaulding
5 hours ago
[-]
I don’t endorse using these apps, but for what it’s worth, Instagram actually does have this feature (tap “Instagram” at the top and select “Following”). You get a chronological feed with no ads and no reels. Of course they don’t provide an option to make that the default as far as I know.
reply
alkonaut
4 hours ago
[-]
Yup so all they need to do is only allow that content feed for anyone under X years in some specific countries. Seems like they'll survive this, and it won't even be very expensive to fix.
reply
Aurornis
4 hours ago
[-]
Reminder that any regulation that depends on age is a trigger forcing ID checks for everyone.

You can’t put a restriction on people under X years without gathering information about everyone’s age. You can’t confirm everyone’s age without some ID check. You can’t do an ID check based on anonymous tokens (too easily shared) so every age check mechanism has some ID revealing step, either to the company or to a 3rd party like a government entity (which will pinky swear they’re not looking at the data).

reply
hapticmonkey
6 hours ago
[-]
Instagram and Facebook both have such features. They’re hidden, though. With Instagram you tap the logo in the top middle of the app and choose “Following”. With Facebook it’s hidden away under the “Feeds” section in the app.

I’d love for there to be an option to have them as default. It’s obvious ($$$) why they won’t do that unless forced to by regulators.

reply
SlinkyOnStairs
3 hours ago
[-]
> I think it would be great if all of these apps had an option to function like you propose: Your feed is a simple view of people you’ve chosen to follow. The end.

This is something EU regulation already requires of them. Earlier this year the Dutch courts ruled as much, all the way up through appeal. It's just a matter of time before other European courts repeat this ruling.

https://www.reuters.com/legal/litigation/dutch-court-upholds...

reply
naravara
5 hours ago
[-]
Why do you assume the recommendation algorithm should be the default? The algorithm is the dangerous thing, THAT should be the opt-in mode not the other way around.

IMO they should not only be opt-in, but should actually be required to publicly list the parameters and weights they’re using and allow users to tune those weights.

reply
Aurornis
5 hours ago
[-]
Sure, if that makes the angry mob happy then let’s make it default. Then every new user can click the button once and be back to the app they expect.

> IMO they should not only be opt-in, but should actually be required to publicly list the parameters and weights they’re using and allow users to tune those weights.

I wonder how many people here know that many of the popular apps have rolled out finer controls for recommendation algorithms so you can do this. On Instagram you can go in and see the topics your recommendation algorithm picked up and modify them manually if you like.

I think the goalposts will just continue to move, though.

reply
naravara
5 hours ago
[-]
No they should have to pick every time whether they want to be in follower mode or discovery mode. Dismissing concerns as “the angry mob” is richly ironic considering the entire objection is that recommendation algorithms seem precisely tuned to foster angry mob dynamics. So yeah it will make the angry mob happy because it will be removing the primary mechanism for inciting angry mobs.

People here know that they have finer controls (which are still not actually that fine and also don’t really make the parameters auditable). The problem is these settings are hidden away in places most people will never look. And also, I stress again, none of this is actually auditable because they treat these as some kind of trade secret special sauce and there’s really no reason society should feel obligated to support or enable this business model.

reply
Aurornis
4 hours ago
[-]
> considering the entire objection is that recommendation algorithms seem precisely tuned to foster angry mob dynamics.

That actually wasn’t the objection in the article we’re discussing at all.

The objection is that recommendation algorithms show people more content they want to view, which leads vulnerable people (kids in this case) to consume more content.

reply
naravara
3 hours ago
[-]
More of what they want to view means a feed of largely inflammatory content that targets easily provoked emotions and encourages commenting and interacting, which keeps people coming back to argue more. The mob dynamics are part of the addiction loop.
reply
HPsquared
6 hours ago
[-]
There is no going back to the 2010 internet unless you confiscate everyone's phones.
reply
alkonaut
6 hours ago
[-]
Not sure what confiscation would accomplish that regulation couldn’t? I mean we’re all aware that if regulators target TikTok then a new app would pop up and take its place.

But the thing about regulation is that it doesn’t need to be water tight. You can just target a small handful of large players and it will improve the situation in practice. It doesn’t matter if 998/1000 apps use addictive feeds if the largest two apps don’t and they have 90% of users/views.

reply
Aurornis
5 hours ago
[-]
It’s naive to think that regulation is going to cover the entire global internet.

If you regulated domestic companies out of existence, global options would pop up in their place. You could try to block them all in app stores but people would go to the web views.

reply
alkonaut
5 hours ago
[-]
I think that's still mostly fine. Youtube is already not an app but a web site (It has apps too but I think it's less app centric than e.g. instagram).

Obviously we need the ability to regulate also global options. Typically if these actors truly become big, then they have a presence in their "target" countries, such as ad sales.

reply
butlike
6 hours ago
[-]
Do it like a library. When a person walks into a library, they're presented with a short curated list of books suggested from the librarian. All visitors to the library see the same books. From there, the visitor can go about their business searching for what they want.

If they don't know what they want, perhaps a good use case for the newfangled LLM-search we have now would be "What's an interesting or popular topic I haven't searched for before?" to which the AI will respond with a list of newly searchable terms.

reply
denismi
6 hours ago
[-]
The first unwatched video from the user's followed/subscribed channels. Chronological, reverse chronological, sorted alphabetically, by the user's channel prioritisation, by likes, by views... whatever the user chooses. And then an end of feed.

For new users? A search bar and a set of (human? AI?) curated seed recommendations that the platform is comfortable with being held liable for.
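
Something like the rough sketch below (field names are made up for illustration): the user picks the sort, only followed channels appear, and the feed is finite.

    from dataclasses import dataclass

    @dataclass
    class Video:
        vid: str
        channel: str
        title: str
        posted_at: float  # unix timestamp
        likes: int
        views: int

    # User-selectable orderings; nothing here depends on watch history
    # beyond hiding already-watched items.
    SORTS = {
        "newest": lambda v: -v.posted_at,
        "oldest": lambda v: v.posted_at,
        "alpha":  lambda v: v.title.lower(),
        "likes":  lambda v: -v.likes,
        "views":  lambda v: -v.views,
    }

    def feed(videos, followed, watched, sort="newest"):
        # Only unwatched videos from channels the user chose to follow,
        # in an order the user picked. Finite: when it's done, it's done.
        items = [v for v in videos
                 if v.channel in followed and v.vid not in watched]
        return sorted(items, key=SORTS[sort])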

reply
noprocrasted
6 hours ago
[-]
> what is the first video you show them

Whatever is latest posted across their followings/subscriptions?

reply
vasco
6 hours ago
[-]
If they just signed up they have no followings or subscriptions. So now what, you need to show accounts to follow first? That's the same problem as deciding what the first video to show is. How do you decide who they should follow? Or is the vision that you can only have friends, as if it's 2005, and you can't discover anything serendipitously?

I don't consume any content from my friends on something like tiktok where I'm interested in discovering people that have good content under topics I'm interested in. I don't know who those people are and I want to discover new ones that come up not just follow some already popular accounts.

reply
Gigachad
6 hours ago
[-]
Undoubtedly the change needed here will introduce friction, will reduce viewing time, and society will be better off for it.

The whole idea here is to make content consumption more deliberate and mindful rather than just opening the app and vegging out to an endless feed of slop.

reply
hug
6 hours ago
[-]
That’s also an algorithm. An unsophisticated one, but an algorithm nonetheless.

You can (and should) argue that such a simple algorithm doesn’t “count”, but fundamentally the exact wording of the grandparent post never works, legislatively.

Lawyers will lawyer.

reply
NekkoDroid
5 hours ago
[-]
> That’s also an algorithm. An unsophisticated one, but an algorithm nonetheless.

The problem always has been "(personalized) opaque algorithms". Time-sorted by followers isn't really opaque, nor is "sorted by likes" or whatever. The problem is always pulling in parameters that a user either has no active control over or that are so variable they could effectively be random.

reply
alkonaut
4 hours ago
[-]
Can everyone just please stop saying "well ackshually sorting is done with an algorithm" and just assume at least not-idiotic-intent here? No no one will ban "algorithms" or suggests anything of that kind. Yes it's a terrible name. Yes it will be hard to formulate what's allowed and what isn't. But a very simple litmus test is: what are the inputs to the algorithm?

users coarse geographic location? Fine

AI detected language of the content? Fine

global popularity of the video clip? fine

user's past behavior, like the number of videos with similar content they watched, or the average number of seconds this particular user waits before scrolling further? Not fine.

The pattern is obvious. Personalized algorithms are what's targeted. Let's keep the discussion intelligent.
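
To make the litmus test concrete, think of it as a whitelist over the ranker's inputs; anything tied to an individual user's history is out. A rough sketch of the idea only, not any proposed legal text:

    # Inputs a ranker may use and still pass the "no personalization" test.
    ALLOWED_INPUTS = {
        "coarse_geo",         # user's coarse geographic location
        "content_language",   # AI-detected language of the content
        "global_popularity",  # popularity of the clip across all users
        "post_age",
    }

    # Inputs that make the ranking personalized (what's being targeted).
    PERSONALIZED_INPUTS = {
        "similar_content_watched",
        "avg_seconds_before_scroll",
        "user_watch_history",
    }

    def is_personalized(ranker_inputs):
        inputs = set(ranker_inputs)
        return bool(inputs & PERSONALIZED_INPUTS) or not inputs <= ALLOWED_INPUTS

    print(is_personalized({"coarse_geo", "global_popularity"}))                  # False
    print(is_personalized({"global_popularity", "avg_seconds_before_scroll"}))   # True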

reply
anjc
45 minutes ago
[-]
Your litmus test isn't correct and your assumption of personalisation isn't correct either. All of the criteria that you see as fine are controlled under the relevant legislation and are considered personalisation, requiring transparency etc.

Furthermore, bills have been brought to EU parliaments that have erroneously attempted to ban all forms of ranking, which would include even the most basic information retrieval algorithms. So it isn't obvious at all what is meant by 'algorithm'.

reply
xigoi
5 hours ago
[-]
It’s not about whether there is an algorithm, but whether it’s controlled by the user.
reply
mrighele
5 hours ago
[-]
> If the user can search like in Youtube then how do you rank the results? That's also an algorithm.

Any ordering is an algorithm technically, so yes just "banning algorithm" doesn't work.

A better alternative could be "the algorithm must be public and reproducible by the user".

"Sort the posts of the people I follow in chronological order" you're good

"Sort the posts by the output of a blackbox trained on user data" too bad you're a publisher and are responsible for what people post.

reply
alkonaut
4 hours ago
[-]
> Any ordering is an algorithm technically, so yes just "banning algorithm" doesn't work

Algorithm in this context (and presumably in any proposed legal text) is about personalization and purpose.

No one worries about presenting content based on total popularity, coarse geography, user's browser language, or anything like that, regardless of whether the actual ranking method is an algorithm in the CS sense. Yes, it's a terrible name for what's being discussed, but let's not lose focus on the purpose because of that.

reply
threetonesun
5 hours ago
[-]
The internet solved the problem of millions upon millions in its implementation details: you share a URL. You follow people, they share URLs, it grows organically, the same way every website worked pre... Instagram? I'm not sure who moved to the algorithmic feed first.
reply
graemep
5 hours ago
[-]
I would say, no *personalised* algorithms other than those based on deliberate user choices would solve the problem. So, what user chooses to follow, or the same for everyone in the country.
reply
irusensei
5 hours ago
[-]
Enumerate by creation time in descending order the unwatched videos posted from the accounts the user follows.

Like social media 1.0.

reply
Computer0
6 hours ago
[-]
I made a new YouTube account recently and my homepage was blank.

https://news.ycombinator.com/item?id=37053817

reply
walthamstow
5 hours ago
[-]
Same thing happens on LinkedIn if you don't follow anybody, the feed is just blank
reply
SirMaster
5 hours ago
[-]
Don't show any video. Make the user search for what they want to see.

Rank them by best keyword match from their search query, if match is equal, order them newest posted to oldest posted.

Done.
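
A minimal version of that ranking, with made-up field names: keyword overlap first, recency as the tie-breaker, nothing about the user.

    def search(videos, query):
        # videos: [{"title": str, "posted_at": float}, ...]
        terms = set(query.lower().split())

        def key(video):
            # Primary: how many query terms the title matches (more first).
            # Tie-breaker: newest upload first.
            matches = len(terms & set(video["title"].lower().split()))
            return (-matches, -video["posted_at"])

        # Drop videos with no match at all, then sort.
        hits = [v for v in videos if terms & set(v["title"].lower().split())]
        return sorted(hits, key=key)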

reply
mc32
5 hours ago
[-]
You know old reddit, Flickr, etc., had ways of presenting content based on different things besides impulsive engagement.
reply
thinkingtoilet
6 hours ago
[-]
It's very easy.

"So the user opens the app - what is the first video you show them?"

You don't. How about that?

reply
wyre
6 hours ago
[-]
These are multi-billion dollar companies.

It's okay if they have some hard problems to solve.

reply
SecretDreams
6 hours ago
[-]
Won't anyone think of the multibillion dollar technolords? They are people too!
reply
pessimizer
6 hours ago
[-]
This seems to be consciously dishonest. Show them "most recent" or "most upvoted" or "A to Z." Pretending like this is hard is bizarre. People have always selected sort and filter algorithms, until companies started taking them away.
reply
phtrivier
6 hours ago
[-]
Of course it's easy: such decisions were taken _before_ the feeds were algorithmically built.

You rely on unambiguous, "physical" properties of the videos.

There is a physical property of all the videos: the time of publication.

There is a physical property of all the channels: did you subscribe to it, or not?

So, you show, in (reverse) chronological order of publication, the list of videos published by the channels you subscribed to.

Now, of course, a brand new user would have no subscription - you show them a search box.

But then, now, your search algorithm has to weight the various channels that match - but your algo can be relatively transparent, relatively auditable, and the same for all users (unless given explicit preferences, and of course national laws, etc, etc...)

I'm sorry, but, I have a "subscriptions" page in youtube or substack, and they're chronological, and they show me what I want to watch. You keep that.

There is a "home" page in both service that is algoritmically built, and they show me crap that the algo want me to watch. You get rid of that.

Do this, and I can consider you a "neutral" actor, and accept that you shift the blame to content producer.

Or, keep the algo feed, but don't take money from advertiser when I watch yet another flat earther video because YOU decided it was trending.

If you want to decide what I watch, and make money from that decision - congrats, you are an editor. You get the earnings, and the responsibility.

Please don't tell me, with a straight face, that the people who build the algo don't "decide" what I watch. If they want to tweak the algo to downgrade the flamewars and outrage and conspiracy theories and violence and abuse, they can. They do not want to, for business reasons. [1]

That's fair, up to a point - we need publications with editors that agree on having "edgy" content. I'm not advocating for blanket censorship.

I did not like social networks preventing me from _sharing_ articles about Biden's son's laptop (this actually went beyond the law, but somehow they managed to find the resources and programmers to implement _that_, because, at the time, the execs were cozying up with a different administration.)

I'm advocating for "accepting your responsibility as an editor".

[1] https://en.wikipedia.org/wiki/Frances_Haugen#October_5,_2021...

reply
bee_rider
6 hours ago
[-]
The conversation has iterated a couple times and one point that people (on this site at least) are stuck on is “well however you rank things—latest, most popular—you’ll need to use some kind of algorithm, maybe quicksort.” This isn’t what the general public or politicians mean when they say “an algorithm” but it does make something of a point, what exactly the general public and politicians mean when they say that… it’s a bit ambiguous.

I think the EU has fully digested this point, and is focusing on the “addictive design” phrase instead, for good reason. It makes it obvious that the problem is a bit fuzzy and related to the behaviors induced, not some cut-and-dry algorithmic thing.

reply
stingraycharles
7 hours ago
[-]
This is one of those things that don’t translate to legal reality very well, as then you have to define “what is an algorithm”.

Is adding advertisements an algorithm?

Is including likes an algorithm?

Is automatically starting the next video after a previous one has finished an algorithm?

Is infinite scroll an algorithm?

Etc

reply
andybak
7 hours ago
[-]
This kind of complex legislation already exists in many areas of the law: revenue collection being the most obvious one. We could choose to treat "societal harm" the way we treat "tax collection".

I'm not saying there aren't infinite edge cases and second-order effects - but we tolerate those already for many things. I'm not pretending this is simple or even desirable - I'm merely stating it's possible if we want to do it.

My biggest fear is that (like the UK Online safety act) this acts to favour the huge corporations because they are the only ones that can afford a team of lawyers. Any legislation should aim to carve out exceptions to avoid indirectly helping monopolies.

reply
stingraycharles
7 hours ago
[-]
Great example. These companies are already experts at circumventing taxes, what makes you think they can’t weasel their way around some arbitrary written law?

Just look at the malicious compliance that Apple and Google have around the App Store stuff, they’ll find a way to comply with the law and implement different addictive dark patterns.

I’m not saying that I disagree that these companies need to be regulated, I absolutely do. I just think it’s going to be a complicated process, and not “oh just ban everything that’s an algorithm”.

And I have absolutely 0 faith in companies like Meta willfully complying.

reply
soVeryTired
6 hours ago
[-]
I have a feeling taxes are possible to circumvent only because a government tends to have one arm that wants to collect taxes, and another that wants to reduce them to encourage certain outcomes (like having a business setting up shop within its borders).

The US may have this dual incentive structure since it wants to build its tech giants while limiting their control, but the EU doesn't. The arrival of a foreign tech social media giant might make the legislation a bit more palatable to pass.

It will undoubtedly be complex to regulate all dark patterns away. But there are a few obvious, easy wins. It'd be a shame to make perfect the enemy of good.

reply
bootsabota
6 hours ago
[-]
Yeah it’s a tough situation to figure out.

But here’s the real problem: people don’t care. And I say that as someone who hasn’t used social media since 2014.

My observation of people’s behavior indicates that when all is said and done, people don’t care—they would rather get the endorphins from posting, liking, following, etc.

But the solution is to allow people to control their own algorithm, and to have open source solutions where communities manage their own social network.

It’s not the algorithm that is the problem it is that people don’t have the choice to curate their own content.

reply
tsunamifury
5 hours ago
[-]
People don’t care because the broader reality offers no alternative today.

We have failed at creating a society with hope for the future.

This is like prohibition talk. It was all to avoid the fact that America in that period was a social hellscape.

reply
AndrewKemendo
6 hours ago
[-]
Regulated by who?

There’s no political organization (yes, Mamdani actually out-raised Cuomo, so let that sink in) that isn’t being actively bribed

reply
bee_rider
5 hours ago
[-]
Although it should be noted that Mamdani’s average donation size skewed much smaller than Cuomo’s, so it is possible that Mamdani was “bribed” by the general public.
reply
kubb
7 hours ago
[-]
This is some kind of a meme where people believe things can’t be defined in legal terms and therefore can’t be regulated. These people are usually not lawyers.

Does anyone know where it’s coming from? I can certainly believe that incompetent jurisdictions have a ton of issues with people misapplying the law and using loopholes.

reply
biophysboy
6 hours ago
[-]
Albert Hirschman wrote a great book about the rhetoric people use to stifle policy proposals 35 years ago. “It’s futile; it won’t ever work” is one common argument. It’s not a meme so much as a cynical reflexive intuition
reply
AdamN
6 hours ago
[-]
One that's reinforced by those against whichever legislation or regulation is being proposed.
reply
naravara
5 hours ago
[-]
> This is some kind of a meme where people believe things can’t be defined in legal terms and therefore can’t be regulated. These people are usually not lawyers.

No they’re engineers who think rules have to function as rigidly in every field as they do in programming.

They either can’t or don’t want to accept that the law is a social construct and what it actually means to you is determined by the weight of precedent, as applied by judges and regulatory bodies. Things are vaguely worded in the law all the time. If people want to dispute how the enforcement is done they sue and judge decides how the rule should be applied.

reply
owebmaster
6 hours ago
[-]
It probably comes from the same pockets that influences legislation
reply
SpicyLemonZest
5 hours ago
[-]
The point isn't that it can't be regulated. What the original comment said was

> This is pretty easy to solve. If you present data by algorithm, you are no longer an impartial common carrier and are liable for the content you present.

But this is not in fact easy. It's hard to define what "present data by algorithm" means in a coherent way, and it's hard to extend liability for the content you present to liability for the manner in which you present it. You could make it work, if for some reason you really wanted to, but it's easier to pursue the strategy described in the source article of regulating specific abusive patterns.

reply
specialist
2 hours ago
[-]
Who has agency in the relationship, server or client.

Said another way: push vs pull.

reply
throwawayffffas
6 hours ago
[-]
"By algorithm" can be easily defined.

An easy benchmark to set up: any feed that displays the data in a way other than the following is considered an editorial choice, and the platform is thus liable as a publisher:

1. In a chronological order, and only filtered based on user selected options.

2. In any other order explicitly selected by the user.

An exception can be made to allow filtering out content that violates the platforms terms and conditions.

Alternatively there can be no exception, effectively making these platforms unworkable. This is also a choice. We do not need these platforms, including this one.

reply
tmvphil
5 hours ago
[-]
If the user selects "sort by algorithm" then I don't see how you've changed anything other than the default. I think it's pretty obvious just changing the default won't work.
reply
throwawayffffas
1 hour ago
[-]
Sort by algorithm is not explicit, hence the explicit in my wording. They can have that option if they want but they must be liable for the content.
reply
xigoi
5 hours ago
[-]
Changing the default makes a huge difference because 99% of people don’t change settings.
reply
tmvphil
5 hours ago
[-]
That's because the default is 99% the way the app is designed to be used. If the default is regulated, then they will just say "sorry the default is boring, click here to bring back the feed" and everyone will just click.
reply
tzs
1 hour ago
[-]
New York did a pretty good job in their law that limits addictive feeds. Here's what their law says:

> "Addictive feed" shall mean a website, online service, online application, or mobile application, or a portion thereof, in which multiple pieces of media generated or shared by users of a website, online service, online application, or mobile application, either concurrently or sequentially, are recommended, selected, or prioritized for display to a user based, in whole or in part, on information associated with the user or the user's device, unless any of the following conditions are met, alone or in combination with one another:

> (a) the recommendation, prioritization, or selection is based on information that is not persistently associated with the user or user's device, and does not concern the user's previous interactions with media generated or shared by other users;

> (b) the recommendation, prioritization, or selection is based on user-selected privacy or accessibility settings, or technical information concerning the user's device;

> (c) the user expressly and unambiguously requested the specific media, media by the author, creator, or poster of media the user has subscribed to, or media shared by users to a page or group the user has subscribed to, provided that the media is not recommended, selected, or prioritized for display based, in whole or in part, on other information associated with the user or the user's device that is not otherwise permissible under this subdivision;

> (d) the user expressly and unambiguously requested that specific media, media by a specified author, creator, or poster of media the user has subscribed to, or media shared by users to a page or group the user has subscribed to pursuant to paragraph (c) of this subdivision, be blocked, prioritized or deprioritized for display, provided that the media is not recommended, selected, or prioritized for display based, in whole or in part, on other information associated with the user or the user's device that is not otherwise permissible under this subdivision;

> (e) the media are direct and private communications;

> (f) the media are recommended, selected, or prioritized only in response to a specific search inquiry by the user;

> (g) the media recommended, selected, or prioritized for display is exclusively next in a pre-existing sequence from the same author, creator, poster, or source; or

> (h) the recommendation, prioritization, or selection is necessary to comply with the provisions of this article and any regulations promulgated pursuant to this article.
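
Read as a decision procedure, the definition boils down to: a feed is "addictive" if it recommends user-shared media based on information associated with the user or their device, unless any of exceptions (a) through (h) applies. A rough, non-lawyerly encoding of that structure:

    def is_addictive_feed(uses_user_or_device_info, exceptions_met):
        """Rough encoding of the quoted NY definition, not legal advice.
        exceptions_met: any of 'a'..'h' that apply, e.g. 'f' when media is
        shown only in response to a specific search by the user."""
        if not uses_user_or_device_info:
            return False  # not personalized, not covered
        # Any exception, alone or in combination, takes the feed out of scope.
        return not exceptions_met

    print(is_addictive_feed(True, set()))    # True: personalized, no exception
    print(is_addictive_feed(True, {"f"}))    # False: pure search results
    print(is_addictive_feed(False, set()))   # False: no user/device info used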

reply
orbital-decay
7 hours ago
[-]
"Algorithm" is a method of selecting the content to display. You're listing presentation types, not selection types. Presentation has nothing to do with supervised selection. Selecting the next video in the infinite scroll would be the algorithm, not the infinite scrolling mechanism itself.
reply
itissid
5 hours ago
[-]
Instead, a regulation could mandate the administration of an anonymized, unbiased, mandatory eval test at the end of every week/bi-week/month, just like instruments for psych evaluation (e.g. "do you feel your <mental-health-metric> has become worse in the last <time-period>, on <scale>?", "did you experience <mental-health-marker> after watching content on social media?").

The regulation could then mandate that, after calibration and correction, the feed be pulled back by training the algorithm to adjust it in a rapid A/B test.

This is all doable by the companies themselves, but since they won't, the key is to mandate it and publish the aggregate results regularly, for example by making it part of the quarterly shareholder SEC reporting requirements.
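
A sketch of what the aggregate reporting side could look like; the metric name, scale, and threshold are invented purely for illustration, since no such requirement exists today:

    from statistics import mean

    def quarterly_wellbeing_report(survey_responses, threshold=-0.2):
        # survey_responses: anonymized answers like {"metric_change": -1}
        # on a -2..+2 scale ("has <mental-health-metric> become worse?").
        avg_change = mean(r["metric_change"] for r in survey_responses)
        return {
            "avg_change": round(avg_change, 3),
            "respondents": len(survey_responses),
            # If the aggregate trend is negative enough, the platform must
            # dial the feed back and re-run the A/B test.
            "must_adjust_feed": avg_change < threshold,
        }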

reply
itissid
5 hours ago
[-]
I would say it's naive to regulate the algorithm rather than its effects. The effects are all that matter in the end.
reply
randunel
7 hours ago
[-]
Everything other than sorting the list of entities by a standard measurement unit (time, length, mass, temperature, amount) needs to be covered by this law.

The moment you add other entities to the list (e.g. ads in between posts), it's also subject to the same restrictions.

reply
stingraycharles
7 hours ago
[-]
This effectively means “every online platform ever” and would also have included MySpace and the OG Yahoo etc, and as such would not really single out the truly bad actors.

And then we’ll end up with another cookie-banner-style law which had good intentions but actually missed the point entirely.

reply
bee_rider
7 hours ago
[-]
Maybe MySpace should be covered. I mean, MySpace probably(?) had the technical capacity to act maliciously in the manner that modern social media sites do; the business model just hadn’t evolved to the modern toxic state yet.

The cookie banner law is fine for the most part. Sites that do the malicious-compliance thing of over-prompting the user for permissions are providing a strong signal that they are bad actors. It’s about as much as we can expect without banning them entirely…

reply
randunel
7 hours ago
[-]
I stopped using Facebook around 2015-ish, when they stopped allowing sorting by date. Prior to this, hi5 and the like also allowed sorting by date. So no, not every online platform ever.
reply
progval
7 hours ago
[-]
It even includes email providers with a spam filter.
reply
3form
7 hours ago
[-]
This doesn't differ much from the legal reality that I've seen. Terms need to be defined, yes. It will require work to do so. And that work should be done even if it's a bother.
reply
baggachipz
6 hours ago
[-]
Ok so then the "algorithm" must be made available to authorities (or even better, the public at large) and be approved or rejected based on a court or a law. Obviously an algorithm based on "engagement" or "narrative" should be rejected with prejudice every time.
reply
pessimizer
6 hours ago
[-]
I don't see a single difficult example here. The answer is "NO." It's strange that you couldn't even find one.

I mean "Is including likes an algorithm?" You might as well ask if having a dog in the video is an algorithm. Any question about "likes" would be if you're manipulating the video selection based on likes, or is the user given a control to manipulate the video selection based on likes. If it's you it's an algorithm. If it's the user, it's a control. If you lie about the likes, then it's an algorithm. If you're transparent about the likes, then it is a control.

The other ones aren't even worth discussing. You might as well ask if having a blue logo is an algorithm, or if Comic Sans is an algorithm. "It's all so complicated!"

-----

edit: that being said, the EU does not care about this issue at all, and has had plenty of mandate and plenty of time to have done something about it if it did. They are also going to say "it's all so complicated." Because their problem is the unpopularity of center-left neolib governments that are just barely holding on with extreme minority support through bureaucratic means, because they wrote the regulations. They want to keep what came for British Labour during the recent council elections from coming for them.

So I guarantee that content will somehow become an "algorithm." The goal is to keep people who don't like them from speaking to each other.

reply
linkregister
36 minutes ago
[-]
Even easier solution: mandate that all harmful content be distributed with the evil bit set in all IP packets. Security: solved.
reply
vondur
56 minutes ago
[-]
This would be extremely hard to put into legislation that wouldn't affect just about every single site on the internet currently.
reply
shiandow
7 hours ago
[-]
And when does the user decide? Must a platform do nothing to stymie spam, or even illegal content, to qualify as impartial?

I suppose the answer could be that only platforms that do indeed allow spam or worse are impartial, but that is a tricky position to be in.

reply
leogiertz
7 hours ago
[-]
The mechanism would be that if the user has chosen to follow an account, then posts from that account fall under common carrier. If the platform chooses to show you other posts, that's under its own responsibility.
reply
viktorcode
2 hours ago
[-]
Define "algorithm" then. I would argue that "sort by date, excluding stuff the user already watched" is already an algorithm.
reply
cmrdporcupine
2 hours ago
[-]
This is just pedantic. "Algorithm" is obviously shorthand for: a recommendations system that shows me things I didn't explicitly opt into.

Compare e.g. Mastodon vs Twitter or Bluesky. The former simply won't show you anything you didn't explicitly subscribe to, and there's no hidden ranking system.

The law is not a computer program. It is up to human interpretation. The law merely needs to define the intent, which is actually fairly easy to explain: you're not a common carrier if you're mediating and promoting and ranking and pushing beyond what the user has subscribed to with their choices.

You can get technical that "sorting" and "filtering" is a form of that, but you'd be applying the lens of a software engineer, not a lawyer.

reply
viktorcode
2 hours ago
[-]
It is pedantic, and you have to be pedantic when talking about laws and regulations. The vaguely written laws have the tendency to be interpreted in the most restrictive way possible by the executive branch.

> "Algorithm" is obviously shorthand for: a recommendations system that shows me things I didn't explicitly opt into.

In that interpretation that is applicable to any form of broadcast, including TV and radio, driven by the user ratings of their previous programs.

reply
wyre
1 hour ago
[-]
This is Hacker News, not a law board. Argue in good faith.
reply
cmrdporcupine
2 hours ago
[-]
You haven't seen any proposed law. Just someone's choice of word that got your back up.
reply
InsideOutSanta
4 hours ago
[-]
This exactly. I find it perplexing that social media companies get to make decisions about what people see but then also get to pretend that they are just a neutral communication medium. They're clearly not.
reply
luke5441
6 hours ago
[-]
This is a bit of a difference between systems. Under a French-style civil law system you would write laws to regulate the harms away. Under English common law, liability court cases about the harm would lead to precedents and then to common law derived from them. Though I'm not an expert on this.
reply
giancarlostoro
1 hour ago
[-]
People have argued that by censoring what users can say, these platforms made themselves editors. If it's not flat-out illegal, I don't see why anyone should waste time trying to police the internet; it's a fool's errand. I've had Facebook's AI ding me for posting literal memes that, out of context, sound ridiculous.
reply
rwmj
6 hours ago
[-]
You'll need to solve the dark pattern where a new account opens on a blank page with a box saying "Would you like us to suggest what you watch here?"
reply
PokemonNoGo
6 hours ago
[-]
Why would anyone go to a new platform if they didn't know anyone to follow there? I don't see a problem here. I download TikTok, search for the SexyDancingDinosaur I heard was on there, and press follow.
reply
an0malous
6 hours ago
[-]
It’s so elegant that there’s zero chance the EU will do it since this is all performative for them
reply
akersten
6 hours ago
[-]
How does this specific horrible take rank so highly on HN whenever something adjacent to big tech gets posted? "Impartial common carrier" is not even an extant legal concept.

It's been argued to death already, I just have to express shock that I'm still seeing this non-starter constantly here.

reply
walthamstow
5 hours ago
[-]
Back in my day, they used to be called social networks
reply
grumbel
6 hours ago
[-]
Alternative suggestion: Force them to open up the service and allow third party clients. Take Art. 20 GDPR "Right to data portability" and extend it to public content.
reply
adverbly
3 hours ago
[-]
Do you really think a couple of algorithm changes are all that's needed to turn social media into something that won't have a significant negative impact on the average child exposed to it?
reply
2OEH8eoCRo0
2 hours ago
[-]
So...repeal Section 230? I like it!
reply
doctorpangloss
5 hours ago
[-]
"this" - you mean, engagement optimization? i think it would be different content. i don't know how much liability matters, people spend all day watching netflix too, and it is "liable."

ironically, i'm only reading this kind of low brow take because people upvote it, not because it makes any sense.

reply
anzerarkin
7 hours ago
[-]
I don’t think this is only a kids issue.

A lot of adults need this too. The addictive apps are very well designed, while most blockers are either too easy to ignore or too annoying to keep using.

I built a small iOS blocker because I had the same problem. Making it strict enough to actually work without making people hate it is the main challenge.

reply
criddell
6 hours ago
[-]
On the radio I heard a reporter talking about things China does during school exams. Apparently all schools have exams at the same time and during that period, social media shuts down at night. I forget the exact hours (10pm - 6am maybe). I'm starting to think that would be a great policy in general for everybody.

I think they also said AI companies go offline during exam hours, but I may have got that wrong.

reply
Aurornis
6 hours ago
[-]
Absolutely wild that we’re seeing proposals to shut down parts of the internet and regulate when people can talk to each other on social platforms as a real suggestion on HN.

I feel like we’ve completely lost the plot when we’re starting to invite government partial Internet shutdowns as a good idea. This is a totalitarian government play.

reply
afavour
5 hours ago
[-]
I think it speaks to the complete lack of government regulation in the area that people see such extreme answers as positive. If any government had seen fit to engage in light regulation of what social media can do people might be happier.
reply
Aurornis
4 hours ago
[-]
Evidence suggests the polar opposite.

Governments have been working on regulating platforms. Every time they get close, there’s outrage when people realize what it means for them.

Age regulations are the best example. Every time the topic comes up there is a lot of support for government regulation of social media by age.

Then every time there comes an actual attempt at government regulation or even self-regulation by the companies, everyone goes ballistic when they realize what that regulation means.

This topic is awash in the idea that regulation will come in like a scalpel, touching nothing we like and hurting only certain companies in some specific way that takes nothing away from us. This notion doesn't survive contact with reality.

That’s how we get these short sighted comments inviting the government to come shut down parts of the internet. I bet the person who asked for that assumed it would be perfectly targeted at sites they don’t need or use, leaving their version of the internet untouched. They never imagined the government might scope creep it to start shutting down communications they didn’t like.

reply
zombot
5 hours ago
[-]
Light regulation won't cut it any more for companies that are too big to jail.
reply
achenet
5 hours ago
[-]
You can always nationalize ;)
reply
pibaker
1 hour ago
[-]
It becomes even more wild when you put it next to the response to cloudflare getting blocked in certain European countries during sports matches.

People love to ask the big government daddy to step on them and when it actually happens they start wondering why would any sane person want that.

reply
schnitzelstoat
6 hours ago
[-]
I can only imagine these people have never experienced such censorship.

Maybe they'll feel differently when they have to upload their ID and face scan (which later gets leaked) just to be able to read a recipe for beer or whatever.

reply
tolerance
5 hours ago
[-]
> I feel like we’ve completely lost the plot when we’re starting to invite government partial Internet shutdowns as a good idea. This is a totalitarian government play.

There's been criticism about the culture surrounding platforms like Mastodon/Bluesky that anticipated this.

reply
prmoustache
1 hour ago
[-]
> This is a totalitarian government play.

Putting China aside, and regardless of one's opinion on the aforementioned measure, I think you first need to learn what totalitarian government and representative democracy actually are before using those words, because you clearly don't know what they mean.

reply
freejazz
2 hours ago
[-]
As opposed to totalitarian tech overlords?
reply
zombot
5 hours ago
[-]
But it's kind of a logical, if misguided, consequence of regulators being completely corrupt and letting those feudal lords do whatever the hell they want.
reply
dgellow
6 hours ago
[-]
I can understand regulating dark/abusive patterns, but at the end of the day I should be allowed to doomscroll at night if I want to
reply
buellerbueller
6 hours ago
[-]
>I can understand regulating dark/abusive patterns, but at the end of the day I should be allowed to doomscroll at night if I [am an addict]
reply
butlike
6 hours ago
[-]
Toast notifications were the big mistake. Also badges. In my perfect world, the only thing retaining the ability to alert the user that someone tried to contact them would be voicemail, subject to the same spam laws as everything else.
reply
kgwxd
7 hours ago
[-]
As an adult, who despises all those apps, I don't want to grant government the power to make that decision for me.
reply
criddell
6 hours ago
[-]
An an adult, do you also believe seat belt laws are a bad thing?
reply
pibaker
1 hour ago
[-]
Seatbelt laws have very limited impact outside vehicle safety. Nor do they open a slippery slope that leads to buses and trains and elevators and dining chairs and beds getting their own seatbelts.

Regulation of speech threatens the basis of democracy. The fact that the countries pushing it most successfully (UK, Australia) are also the ones with serious freedom of speech problems compared to their Western peers should also tell you that no, they will not stop at throwing you in jail for memes on Twitter.

reply
joshlemer
4 hours ago
[-]
This argument is lacking nuance. Just because there are some instances of paternalism one is prepared to accept, doesn't mean that every possible paternalist policy is always okay. Being in favour of some instances but not all, is not a logical inconsistency. We can talk about each instance on a case by case basis.
reply
jayGlow
6 hours ago
[-]
Personally, yes: that kind of choice should belong to the individual, not the government. Besides that, the laws are nonsensical. Why is a seatbelt required in a car but not in a bus? Why are motorcycles even allowed at all?
reply
moooo99
5 hours ago
[-]
This argument falls apart for countries with socialized healthcare.

As long as all people are paying for your dumb decisions, it is reasonable to expect the government to reduce the frequency of dumb decisions by adequate means.

reply
joshlemer
4 hours ago
[-]
I notice that these sorts of justifications for increased paternalism as a consequence of socialized services come up in public discourse all the time, but they never seem to be mentioned by advocates when proposing these socialized systems. It should be stated up front as a significant cost of the package: it comes with strings attached, like the government telling you how to live your life. Interesting that people don't seem to want to mention that up front.
reply
nekusar
6 hours ago
[-]
Yes, I do. It's just another way that cops can pull you over for bullshit charges and revenue enhancement.

I remember in my state it was initially only a citation that you couldn't be pulled over for. Then they flipped that and started pulling people over for it. Why? A pure fucking money grab.

Me not wearing a seatbelt means I risk getting splattered. Not you, or anyone else.

reply
foobarian
6 hours ago
[-]
> Me not wearing a seatbelt means I risk getting splattered. Not you, or anyone else.

Except who pays for your million-dollar reconstructive surgery and rehab? I don't suppose you will cover that out of pocket to avoid burdening your fellow insurance payers with your reckless behavior?

reply
wackget
5 hours ago
[-]
This cannot be a genuine take from a real person.
reply
aeve890
6 hours ago
[-]
>Me not wearing a seatbelt means I risk getting splattered. Not you, or anyone else.

Physics says otherwise. In a collision you don't decide where your body gets yeeted, and your skull could end up inside the skull of a passenger wearing his seatbelt. Don't be a moron. https://youtube.com/shorts/n2yLMGA_YSA?si=AlvRgfpb-PJxGCBw

reply
chinathrow
6 hours ago
[-]
Is this satire?
reply
nekusar
4 hours ago
[-]
Are school buses without seatbelts satire?

Are motorcycles without seatbelts or harnesses satire?

All I'm saying is that an adult should be able to choose whether to wear a seatbelt in their own vehicle, and shouldn't get fined for choosing not to wear one.

BTW, I wear one when I drive or am a passenger. And if I'm driving, I ask everyone to wear one.

reply
chinathrow
4 hours ago
[-]
> All im saying is that an adult should be able to choose to wear a seatbelt or not in their own vehicle.

The millions of deaths and injuries avoided as a result of seatbelts being obligatory in many countries around the world do not share your world view.

https://unece.org/sustainable-development/news/unece-celebra...

reply
nekusar
4 hours ago
[-]
So you don't care about children on school buses? None of them have seatbelts.

And motorcycles are explicitly allowed, and have no restraints or harnesses. Mopeds, same. Scooters, same. Bicycles, same.

Adults *should* have the right to engage in risky behaviors that increase the chance of bodily injury. And no matter what link you put forth, it doesn't explain why fucking school buses that transport kids aged 6-18 don't have seatbelts.

reply
SlinkyOnStairs
3 hours ago
[-]
> I don't want to grant government the power to make that decision for me.

The alternative is letting multi-trillion dollar companies make those decisions for you, which they do with the explicit intent to manipulate you AND to push the politics of the currently sitting government of the United States.

Meta has repeatedly censored LGBT content, with no warning or stated policy change, since the government changed. All without the formal legislative process. Good chance the Trump admin didn't even ask for this, Meta just did it pre-emptively to suck up to them.

Opposing some basic restrictions on addictive and exploitative features and the requirement to offer users a standard reverse-chronological-followed feed without "The Algorithm", does not make you an Anti-Government Free Thinker. You're the exact kind of "sheep" Zuckerberg & the Trump administration want you to be.

reply
wackget
5 hours ago
[-]
You might have the self-awareness and impulse control to stop yourself from getting addicted to these apps, but the majority of the world's population does not.

These giant companies pour millions upon millions of dollars into engineering their services to be as "engaging" (read: addictive) as possible with the specific goal of making users spend more time on them.

Against that, the average person has no chance. The power balance is hugely uneven.

A responsible government which actually cares for its people has a duty to protect them from abuse like that.

reply
actionfromafar
7 hours ago
[-]
If we afford the same protections to adults, we don't need age verification either. Just a thought.
reply
Pesthuf
7 hours ago
[-]
Tell me: why are these algorithms suddenly okay when the victim turns 18?

They are bad for everyone and if you’re willing to regulate them, make them illegal to be used on anyone.

reply
nomel
46 minutes ago
[-]
Because I'm an adult, rather than a sheep for you to control, and I much prefer being shown things in my own interest (via a recommendation algorithm) rather than things in the general population's interest. In fact, I wish I could upload my own recommendation model to all the social media platforms, to curate them better and put some black holes around topics I want to avoid, with the option to fuzz it a bit for exploration.
reply
xboxnolifes
1 hour ago
[-]
The idea is that underdeveloped brains are less capable of making correct choices. Why do we suddenly make people responsible for their actions at 18?
reply
Mashimo
7 hours ago
[-]
Just from this article it's not clear if the methods like endless scrolling or "watch next video" are going to be regulated based on user age or not.

It just says that the platforms using such methods often target kids.

reply
palata
7 hours ago
[-]
Same as with cigarettes: it's a lot easier to regulate stuff for kids, because we as a society tend to agree that they need to be protected. Much harder to do with adults, because there is much less of a consensus.
reply
abyssin
2 hours ago
[-]
I genuinely wonder if any end user would be against banning these algorithms. I'd love RSS-style content providers back. Just let me freely subscribe to accounts.
reply
poszlem
5 hours ago
[-]
Because, in general, we see adults making bad choices as a price worth paying in a free society, but we recognize that children lack the maturity and judgment to make those choices for themselves.

Most adults also lack the maturity and judgement, but allowing adults to make bad decisions is usually less dangerous than giving someone else the power to decide which decisions are too bad to permit.

reply
owenpalmer
2 hours ago
[-]
Smoking has entered the chat
reply
nadermx
1 hour ago
[-]
Why is it always people pushing government to replace parenting? My mother used to take away my computer if I didn't abide by her rules.
reply
fc417fc802
1 hour ago
[-]
People are pushing to regulate a product that's proven detrimental to wellbeing on a large scale. What does that have to do with parenting?
reply
GoatInGrey
1 hour ago
[-]
Some believe, though they hesitate to put it in writing where it could be openly challenged, that regulations are a dysfunctional aspect of governance and that solutions to systemic issues lie at the individual level.
reply
nadermx
1 hour ago
[-]
You must be joking, right? You're basically asking what parenting has to do with kids, or how a parent could tell a kid not to install TikTok and follow through with consequences if they do. I'm confused.
reply
danlugo92
1 hour ago
[-]
It's not about children.
reply
rmnwski
2 hours ago
[-]
What I really enjoy about HN is the fact that it's not infinite scrolling. I scan the front page; if something grabs my attention, I'll look at it. If something has a lot of comments, it seems important. When I'm done scanning/reading, I'm done. On social media, you're never done.
reply
randoments
2 hours ago
[-]
That's why they invented /new and the refresh button.

Tbh, I have a procrastination problem, and HN is as good an excuse as any other social network.

reply
FinnKuhn
7 hours ago
[-]
I think especially restricting endless scrolling is a good thing overall to reduce the addictiveness of social media and its harmful effects.

HN having pages instead of a feed or endless list is one of the things I really like about it.

reply
nanapipirara
7 hours ago
[-]
For sure.

The other thing I really love about HN is that titles are all supposed to be boring and to the point. The guidelines[1] for titles are excellent and I wish more of the web and honestly legacy media too would behave that way. Things that are of no interest to me are not trying to waste my time and attention.

[1] https://news.ycombinator.com/newsguidelines.html

reply
dinfinity
4 hours ago
[-]
It's probably not something that can be enforced legally, but the concept of DeArrow (user-provided titles and thumbnails for Youtube) should be the norm.

Perhaps there can be an EU maintained browser extension that allows netizens to provide alternate, honest titles and thumbnails for all kinds of content. Probably still incredibly hard to implement throughout apps etc., but a man can dream.

reply
ekjhgkejhgk
7 hours ago
[-]
> I think especially restricting endless scrolling

The actual point is that they are designed to be addictive. "endless scrolling" is just an implementation detail. If you "ban endless scrolling", they'll still be using every other trick to make it addictive.

reply
jrflo
6 hours ago
[-]
I heard someone on a podcast call social media algorithms "the modern-day cigarette" and that really resonated with me. These companies know their product is addictive and bad for users, but they keep pushing it anyways. Like cigarettes, it's bad for everyone, not just kids. I made an algorithm blocker for Safari because of that and it's actually crazy how much more pleasant social media is if you don't have recommendation algorithms at all. I think the EU and other jurisdictions should really look beyond just limiting this stuff to kids, but I understand why it's starting there...
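
For context on how this kind of Safari blocker can work (not necessarily how any particular product does it): Safari content blockers are essentially a static JSON rule list of trigger/action pairs. A minimal sketch, with placeholder CSS selectors standing in for whatever the real feed elements are called:

    # A minimal sketch of Safari-style content-blocker rules, emitted as JSON.
    # The CSS selectors below are placeholders, not the sites' real class names.
    import json

    rules = [
        {
            # Hide a hypothetical recommendations/explore entry point on instagram.com
            "trigger": {"url-filter": ".*", "if-domain": ["instagram.com"]},
            "action": {"type": "css-display-none", "selector": "a[href='/explore/']"},
        },
        {
            # Hide a hypothetical "For You" feed container on a short-video site
            "trigger": {"url-filter": ".*", "if-domain": ["tiktok.com"]},
            "action": {"type": "css-display-none", "selector": "[data-e2e='for-you-feed']"},
        },
    ]

    # This JSON rule list is what a Safari content-blocker extension would ship.
    print(json.dumps(rules, indent=2))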
reply
Aurornis
5 hours ago
[-]
If you didn’t notice, this comment is an ad for a paid app trying to capitalize on social media anger. I respect the hustle, but this is not a neutral comment on the topic due to the financial interest. There are many free alternative plugins for targeting social media feeds if someone wants to filter these.
reply
skrebbel
3 hours ago
[-]
I always liked about HN that we'd be OK with people plugging their products so long as it was on-topic and not all too shameless. After all, it's a site frequented by entrepreneurs, we all know how hard it is to get a product off the ground.

IMO this comment (also before the link was removed) fit that bill perfectly and I'd encourage the author to share the link anyway.

reply
jrflo
5 hours ago
[-]
I removed the link, just thought it was relevant to the discussion.
reply
tolerance
5 hours ago
[-]
I was going to make a similar accusation as the above but I skimmed your comments and it didn't seem like you were the sort to have ill intent behind bringing it up. Next time you might want to include one of those stuffy "Disclosure" notices.
reply
jrflo
5 hours ago
[-]
That's a good idea, thank you for the feedback. I have a hard time finding the line between "advertising" and "sharing something I built" on this site sometimes.
reply
notarobot123
5 hours ago
[-]
but you should link to it in your bio for those of us still curious to find out what it is you built.
reply
jrflo
4 hours ago
[-]
Thanks for asking, I put a link to my personal site in my bio if you want to find it, it's called Scrolless.
reply
b3lvedere
4 hours ago
[-]
Not really anyone's fault. Thanks to endless enshittification, relentless advertising, and the current 'AI', some people may have come to behave like this even if your intention was good.
reply
echelon
4 hours ago
[-]
You shouldn't remove the link.

Self-promotion is the anti-hyperscaler. I'd far prefer 100,000 hustlers to the hyperscalers forcing device attestation, 92% URL-bar-to-ad-funnel monopoly, "oops no more adblocker for you anymore", hardware we don't own, no direct app downloads, dumping on healthy markets with outside business unit profit to kill them off, continue growing like cancer, etc. etc.

Also, while on the point of this, I'm hoping fast on-device AI agents finally kill off advertising. I'm hoping my agent will stand between me and (advertising, toxicity, the algorithm, etc.) and literally rip the suit and pants right off Meta, Google, et al.

I want to put a de-Google/de-Meta agent on every device.

You want my eyeballs, you pay me.

reply
ethbr1
3 hours ago
[-]
> I'm hoping fast on-device AI agents finally kill off advertising. I'm hoping my agent will stand between me and (advertising, toxicity, the algorithm, etc.) and literally rip the suit and pants right off Meta, Google, et al.

How'd that work out with browsers?

Funding matters and working for the people pays less than working for businesses.

If we want a user-agent future, then we'd better figure out a financial model to incentivize that at scale. (A Graphene or Mozilla subscription service?)

reply
glenstein
4 hours ago
[-]
I have no issue with people sharing their personal projects if they're relevant to the discussion.

If you're reading this and you have a personal project, please proactively share it in the comments when it's relevant and on topic. I try to upvote and be supportive when I can to make sure people feel welcomed.

reply
herf
2 hours ago
[-]
Hang on, this is a $2.99 one-time payment - below the level you can even buy ads and make a profit. There is no way he's trying to make billions of dollars this way, and it's honest and smart. Consider the perspective - what happens with the "free alternatives"? You know they're not free: either they already track you, or someone buys them and turns them into spyware. We need more things like this, not less.
reply
cyanydeez
5 hours ago
[-]
If you hadn't spent time on HN, you'd believe this was some aberration as opposed to the norm. This is a YC-run forum, so it's pretty normal for comments to contain software-advertisement content.
reply
econ
5 hours ago
[-]
I made a website 8 years ago for a business that was abandoned very early on by lack of time. Last week the first order came in.

Makes me curious, why should organic discovery not be a thing? How should it work?

I get that we don't want to look at promotional messages but do customers want to pay for advertising? I think many would be surprised how expensive it is to buy one customer. Some sectors more absurd than others.

I see lots of ads for things I know cost a tiny fraction of what is asked.

The idea that everything is spam seems much too convenient for big business to be a coincidence.

I think we should go back to having a link to our website with each post. That actually makes it worth spending some time helping people.

reply
Aurornis
4 hours ago
[-]
It’s also the norm to call it out when someone isn’t disclosing their financial interest in something.
reply
wackget
5 hours ago
[-]
The modern-day cigarette is such a perfect metaphor for social media. A cabal of unfathomably wealthy companies spreading their harmful products across the world; making them as addictive as possible while actively burying the research which proves how harmful they are. I truly hope one day we'll look back on social media and smartphone use the same way we regard smoking.
reply
ErigmolCt
5 hours ago
[-]
I think the smoking comparison works best when applied to the engagement mechanics rather than "social media" as a whole
reply
jen20
4 hours ago
[-]
Why? Smoking was always pitched as a social activity.
reply
Lutger
3 hours ago
[-]
And we still let tobacco companies spread their products, which are practically speaking as harmful as they ever were, maybe even more so considering their environmental impact as well.
reply
SirMaster
5 hours ago
[-]
But if you stop using social media, you don't still have a risk of lung cancer in the future.

The effects of social media usage are surely reversible by stopping using it and then some retraining of the brain.

The effects of years of smoking are not so reversible in terms of what it does to your body.

reply
wussboy
5 hours ago
[-]
I hope you're right but I think you're dead wrong. Social media has not only affected the mental health of millions of people negatively, it has brought about social, political and economic harms that will affect the planet for generations.
reply
glenstein
4 hours ago
[-]
Right, the thing it reminds me of is the long-term impact of reading to your kids at a young age: it has measurable effects equivalent to expensive professional education choices made later in life, although I forget the exact comparison.

But also, it doesn't have to exactly reproduce the harms of smoking. It could be that the effects are primarily present-tense and completely gone if you stop the habit, and they could nevertheless amount to a cumulative social harm that makes it a worthy analogy to smoking. Social media also doesn't cause secondhand smoke, stained teeth, or unpleasant odors on your person, home, or furniture. It doesn't leave butts or debris on the ground. There's probably a lot more I'm not thinking of either, but you can see how nitpicky that starts to feel.

reply
SoftTalker
4 hours ago
[-]
Neuroplasticity wanes as people become adults. I'm not saying it's impossible, but changing ingrained patterns of thinking as an adult can be difficult or require deliberate effort and perhaps help of trained therapists.
reply
msabalau
3 hours ago
[-]
In the absence of any evidence, it is really unclear why anyone needs to catastrophize about generations of harms.

Is there any reason to believe that "social media existing" is a worse and more enduring harm than tens of millions of people dying in the Second World War, the trauma of the survivors, the vast destruction of infrastructure,or the start of the risk of nuclear war?

Yet the post-war baby boom generation seems to have led a remarkably fortunate life, overall.

reply
somebehemoth
5 hours ago
[-]
> The effects of social media usage are surely reversible by stopping using it and then some retraining of the brain

This is a reasonable, but optimistic take. The effects of social media on developing brains will need to be studied to be sure the effects are reversible. Furthermore, how extensive is the damage and how long does it take to reverse? Are older people less likely to recover?

reply
bcrosby95
4 hours ago
[-]
The best thing a smoker can do is quit smoking. At any age. It's not just the long term risk, there's all sorts of short and medium term effects. I think the comparison is more apt than you're giving it credit for.
reply
biophysboy
5 hours ago
[-]
This is all true, but I think the main cost is the time wasted. The opportunity cost is enormous for humanity.
reply
jubilanti
3 hours ago
[-]
You have absolutely no evidence of these claims and are just rampantly speculating based on vague knowledge. You can't just shout "neuroplasticity!"

Cite sources or delete your comment.

reply
enedil
5 hours ago
[-]
Why are you so sure that the effects of social media are reversible?
reply
SirMaster
5 hours ago
[-]
Neuroplasticity. Seems better than the damage caused to your lungs and cells from smoking.

I mean, do you have any evidence that the brain is irreversibly damaged by social media? I have not seen any, but I have seen evidence that there is permanent cell damage from smoking.

reply
InsideOutSanta
4 hours ago
[-]
To play devil's advocate, there are good studies linking social media use to depression.

While you can somewhat mitigate the negative health effects of smoking by stopping and then making healthy decisions like doing sports and paying attention to what you eat, depression isn't something you can just stop having.

reply
SirMaster
4 hours ago
[-]
But are you saying that social media causes irreversible and permanent depression that neuroplasticity cannot ever reverse?

There is also a healthy side to social media, but not really a healthy side to smoking.

Social media helps me make and keep in touch with friends. I have not found any negatives personally. My feeds are pretty much just posts from friends. I have removed everything else by now.

reply
Teever
3 hours ago
[-]
This whole conversation seems a bit simplistic and reductionist.

Sure, the brain grows and changes, but just pointing to "neuroplasticity" (a concept none of us really understands) and saying "it's all good" isn't that insightful, because it's too one-dimensional. At the end of the day we can say that this must have some permanent effect on the brain, because people remember their time on social media, right? Yes, it's a mixed bag with some positives, but there's an opportunity cost for the time spent on social media, in the form of time shared with loved ones, the formation of positive relationships in the real world, and perhaps career opportunities.

With that said the bigger issue to keep in mind is that the people who push this kind of technology on society do so knowing that it has negative consequences for individual users and society as a whole and yet they push it anyways for personal profit. And more than just pushing it they actively lobby the government to change laws or prevent regulations from being enacted that would stop them from doing so.

This is odious behaviour and it should be stopped and the people involved should face personal consequences for damaging society so casually.

reply
genghisjahn
4 hours ago
[-]
I'm not the first person to notice this, but since I switched to Pixelfed and Mastodon, I've found that I just don't spend as much time on social media as I used to. It's not that I don't follow good people, but without the algo burrowing into my lizard brain to keep me swiping, I just don't think about it. When I do remember to check them, it's always pleasant. I check a few posts, look at an interesting link, and 20 minutes later I'm back to the real world. That's great for me as the user, but I doubt you can build an ad-driven business off that. I wish I could say that I'm savvy enough not to get sucked into swiping through scores of "funny" videos, but if I give an hour to that crap, the hour is gone and I have nothing to show for it.
reply
p2detar
6 hours ago
[-]
Look up images in Google with `eu cigarettes boxes`. Banning is a thin wedge, but I think we need something like these warning labels for social media.
reply
wafflemaker
1 hour ago
[-]
Funny, just heard an interview where the guest said that nowadays more people feel bad about scrolling than about smoking cigarettes.
reply
kjeksfjes
5 hours ago
[-]
I agree with the cigarette analogy up to a point, but the UX consequences are easy to understate.

A lot of what makes these products feel “good” in the moment is exactly what regulators may end up targeting: no stopping points, instant continuation, algorithmic relevance, autoplay, low-friction notifications. If you remove or weaken those things, many users will probably experience the result as worse UX, even if the policy goal is reasonable.

So the hard part is not just “ban addictive design”. It is deciding which kinds of friction are legitimate product safety, and which ones become the digital equivalent of cookie banners: technically protective, but broadly annoying, ignored, and eventually hostile to normal use.

Starting with kids makes sense politically and morally. But if the regulatory logic is "this is bad for everyone, not just minors", then adult UX probably will get pulled into it too.

reply
trollbridge
4 hours ago
[-]
Like the tobacco industry, they have confidential memos about how to target children whilst simultaneously claiming they would never, ever target children.

And then they pushed hard for legislation to make it someone else's problem (like when the tobacco industry astroturfed for laws making it illegal to sell cigarettes to under-18s, after their own research showed it wouldn't make much of a difference to youth smoking rates and would even improve smoking's image as a "rebellious" thing to do). Sound familiar, given Meta's big push to have your OS declare how old you are?

reply
LPisGood
5 hours ago
[-]
This is still a recommendation algorithm, just a less enjoyable/addictive one. Any process by which you decide what to show to a user is an algorithm.
reply
jrflo
5 hours ago
[-]
I see what you're saying; I should have been more specific. I mean recommendation algorithms artificially created by platforms to drive more traffic. I think the HN method of user votes without manipulation by the platform is better but not ideal; the best method is 100% user-curated content (i.e. following specific accounts on Instagram/Twitter, RSS feeds, etc.), which I would argue is not really a recommendation algorithm. I think people don't realize how much the content they see influences their thoughts, and how much that content is chosen for profitability over anything else.
reply
mosquitobiten
4 hours ago
[-]
For me the problem is the amount of junk or already-seen stuff I have to filter through to get to the genuine posts. It's like going into the store to get a pack of cigs and being presented with an infinite number of unknown brands, flavours, and quality levels, with no idea what to buy.
reply
ErigmolCt
5 hours ago
[-]
The problem is when the product becomes an optimization machine for attention
reply
chaosharmonic
4 hours ago
[-]
I mean, I'd argue it's worse. Cigarettes don't run your communication networks, and aren't a functional necessity for businesses to advertise their services.

On that note, not that I think regulation is the entire solution in the first place (see ATProto for an example of something independent of government that gives me hope for the Internet), but I feel that where a lot of the "protect kids" Internet bills fail is that many of them treat that as a separate, special concern in a lot of areas where they could cover it anyway by just trying harder to protect users.

In the US, where I'm writing this, it's sort of like how our age discrimination laws are written just to protect elders, but didn't do anything to protect them from the lower floor that came from letting businesses keep spreading stereotypes about who the minimum wage is for or otherwise pushing hustle culture onto 20somethings.

The use of the Internet to astroturf political discourse is an example of this -- you can't fully protect kids from school shootings with an Internet safety bill if you're not also going after bot farms that exist to benefit the "thoughts and prayers" crowd. But you're also never going to see that in an Internet safety bill for kids, because that (and for that matter a lot of our discourse about addictive mechanics in general) explicitly leaves out voters.

(clarifying edit: I'm not saying there aren't valid concerns around this topic. I am saying that when we say things like "experimenting on users' mental health without their knowledge is bad," the baseline should be that you don't have to add anything to the sentence for it to be taken seriously.)

reply
Ylpertnodi
4 hours ago
[-]
> aren't a functional necessity for businesses to advertise their services.

Cigarettes don't collect and sell data.

reply
chaosharmonic
4 hours ago
[-]
Also true.
reply
CalRobert
4 hours ago
[-]
Agreed. Imagine if you had to open a pack of cigarettes every time you wanted to check the weather… then blamed people for being addicted to nicotine.
reply
yieldcrv
2 hours ago
[-]
I think this is even more applicable, because many people younger than me do regulate their social media use, taking "detoxes" or having a more limited use of it altogether, and they are more likely to have a social circle that reinforces that

It reminds me of how I have never been tempted to use a cigarette or any nicotine product and view them as nasty, while me being a little kid telling an addicted adult "you know, those are bad for you" was met with a shrug or I can quit any time, as their social circle and support system was based on using it

Makes me think my generation is cooked when it comes to social media use

reply
mock-possum
4 hours ago
[-]
Huh, and like the cigarette, even though I feel like I see what the appeal is supposed to be, I just cannot get over how gross it actually is to engage with, and feel like I’m already ‘over it’ and am just waiting for everyone else to figure it out.
reply
dylan604
3 hours ago
[-]
> I just cannot get over how gross it actually is to engage with

Cigarettes had the bonus of the infamous second-hand smoke, which affected those not actively using the cigarette. It was the non-smokers who took action: we first started to see non-smoking sections in public places before eventually going completely non-smoking. There's really no equivalent where people using their devices in zombie mode in public places annoy those around them to the same degree.

reply
pembrook
5 hours ago
[-]
Glad to hear a false comparison to something that's actually physically/chemically addictive really resonated with you (a.k.a. affirmed your already existing beliefs in this moral panic).

If we step back and look at this rationally though, can anybody point me to any peer reviewed studies (the actual studies, not clickbait articles written based off the studies) showing that social media is anywhere near as physically harmful or addictive as cigarettes?

I'm totally open to the idea that engagement algorithms are inflaming social division. I'm less convinced that the children are the ones being harmed, however. I think it's the adults who grew up in a media monoculture where the default was trust who are more susceptible to negative outcomes.

When things change, the young are the ones more likely to adapt.

reply
nicolix
3 hours ago
[-]
reply
tim-projects
4 hours ago
[-]
One of the issues with social media is that it's difficult to quantify the harm caused, and that holds true for any form of mental or emotional harm. One form that definitely can be quantified is the harm to the social media moderators who have to sit all day reviewing explicit and illegal content.

I also think you need to review what you consider the barrier to entry for harm. If you imply that there needs to be chemical or physical evidence, congrats: you just threw out most harassment cases.

reply
pembrook
4 hours ago
[-]
Yes, raising the bar for what we consider harm is a good thing. Also, nobody is talking about harassment; that's already illegal.

In modern developed economies we don't have a problem with the barriers to harm being too low. We've got the opposite problem, where we've become deathly afraid of trivial imagined harm, resulting in us basically never doing anything and regulating new things out of existence (just look at the housing issue in cities in pretty much every developed country for example).

reply
micromacrofoot
3 hours ago
[-]
Facebook ran its own internal study that showed that Instagram was causing mental health issues in teens, and then tried to bury it.

https://www.theguardian.com/technology/2021/sep/14/facebook-...

reply
sofixa
4 hours ago
[-]
> If we step back and look at this rationally though, can anybody point me to any peer reviewed studies (the actual studies, not clickbait articles written based off the studies) showing that social media is anywhere near as physically harmful or addictive as cigarettes?

First thing that comes to mind, not exactly what you're asking for but still pretty clearly "physical harm": Facebook enabled the Rohingya genocide, with their algorithm fueling the hatred's spread. They knew it was happening and ignored it. Yes, genocidal hatred can be spread via other means just as well (like radio in Germany and Rwanda), but that doesn't absolve Facebook of the blame, just as you wouldn't be absolved if you started a radio station to spread hateful propaganda encouraging violence.

https://www.lemonde.fr/en/pixels/article/2022/09/29/rohingya...

https://www.asc.upenn.edu/research/centers/milton-wolf-semin...

https://www.amnesty.org/en/latest/news/2025/01/united-states...

reply
shevy-java
5 hours ago
[-]
That rationale never convinced me.

Smoking has definite physiological effects. Molecules bind to receptors or neurons and initiate cascades/responses.

I don't see this with a user interface in a browser at all. If you want to use that line of reasoning, why are regular ads allowed? They piss me off. Why do I have to see them? They addict my brain to wanting to buy crappy products. So why is there no ban here?

Let's face it - the EU is on a path of "Minority Report" here.

> I think the EU and other jurisdictions should really look beyond just limiting this stuff to kids

Yeah, they try to restrict what we can do. We old-school people call this fascism. See the EU trying to destroy VPNs. And there is a meta-strategy at work here: many lobbyists are activated and try to "sync" laws that never made any sense into as many countries as possible. I can see where the corruption happens. And I don't buy the "we protect kids" lie for a moment.

reply
SiempreViernes
5 hours ago
[-]
Already Hippocrates was linking the mind to the physical brain, and if you've never felt a physical reaction from looking at the fairer sex I feel bad for you son, yet if you got ninety-nine problems at least sex ain't one.

It's just so tedious to see this "information cannot harm anyone" theory in a context where a huge fraction of people spend their entire day jobs trying to make phishing less effective.

reply
sixo
5 hours ago
[-]
To hold this view you have to think of information as "not real", not like "real" molecules and receptors, the mind as distinct from the body, and then restrict the legal definition of harm to only "real" things.

This is an odd thing to do, because:

- information is real, it exists in the universe.

- the harm of social media is real, as measured by many of the same measures as the harm of smoking

Why not do something about ads? No, that's a good thought, we should do that too.

I think a decent conceptualization here is "psychic damage", as in a video game. These things deal a lot of it.

reply
akersten
5 hours ago
[-]
The other side of the view is "information is real and I don't like some of it ("it's harmful/addictive/blasphemous") so it must be controlled and regulated."

I don't think it's an odd thing to be opposed to that line of thinking.

reply
kyledrake
5 hours ago
[-]
People in here are casually linking social media to cigarettes, a product that kills half its users, and in previous iterations I've seen people compare social media to using heroin. It's completely hysterical.

I expect tabloid journalists and grandstanding politicians to do this, it really scares me when HN users that should know better do it.

reply
ambicapter
5 hours ago
[-]
This sounds like a "depression isn't real" and "if you're addicted, just stop" type of comment.
reply
kyledrake
5 hours ago
[-]
Depression is real, I'm experiencing it right now reading these comments.

You know what, why don't you go buy a carton of cigarettes and some heroin, and go use that for a few months. Since it's the same thing as looking at a news feed you shouldn't have to worry about addiction because you've already done that and not gotten addicted to it, so you should be fine, right?

reply
AshleyGrant
5 hours ago
[-]
> Depression is real, I'm experiencing it right now reading these comments.

No, you aren't. You are trivializing what Depression actually is by making flippant comments like that. You're also letting everyone know that you are utterly ignorant to what Depression actually is.

Do better.

reply
kyledrake
4 hours ago
[-]
You're mad at me for being flippant about depression while you're being flippant about social media, pot calling the kettle black. I'm not the one that started flattening every bad habit or unhealthy technology into "basically cigarettes and heroin".

If everything addictive is treated as morally and medically equivalent to hard opioid abuse or crippling nicotine addiction, then the language we use to talk about actual addiction stops meaning anything. There are people whose entire lives, bodies, families, and futures have been destroyed by heroin. Cigarettes killed my grandfather, I had to sit there and watch him die as a machine sucked black liquid out of his lungs.

Comparing that reality to doomscrolling on your couch cheapens the severity of one problem while oversimplifying the other. Internet overuse can certainly damage mental health. But pretending that checking TikTok is the same category of damage as opioid dependency is not awareness, it's insanity, and it's actively dangerous. It's also mostly happening because lawyers figured out this might be a way to sucker people into going around Section 230.

If people stop breathing the fumes of the vibes of this idea and start to process how any of this would actually work, they will eventually discover that they are proposing an internet police state where police with guns tell people what they can and can't do on internet forums. If you don't think that will slippery slope into something you don't want, please read more history. Government is fundamentally a dangerous monopoly on force and it needs to be treated with deep caution. People that want government regulated social media (remind me, who is the current US president?) so they can "own the Zuck" are playing an incredibly dangerous game, and I really hope they come to their senses soon, because the irreparable damage this idea will create will far outlive Facebook and Mark Zuckerberg.

reply
AshleyGrant
3 hours ago
[-]
> You're mad at me for being flippant about depression while you're being flippant about social media, pot calling the kettle black.

Where did I do that?

I stopped reading your comment after that since you accused me of saying stuff I never said, so whatever you wrote is unworthy of my attention.

reply
kyledrake
3 hours ago
[-]
I agree with you, please don't read any of this. Not wanting a government controlling online content (with police with guns!) on a false pretense that "Instagram is cigarettes" is apparently a fringe opinion now and I wouldn't want to damage your brain and make you addicted to my terrible posts.

To everyone else reading this: go back to when you were a teenager, and ask yourself how cool you would be with your government saying you can't look at web sites and forums because they're "too addictive", or you can't listen to Nine Inch Nails because it's "for adults only", or Geocities has to be shut down or you need to be carded to use it because it has "adult content" (it had a lot of adult content, including a robust LGBT community when it was very much not safe to host that content). How would you have felt about that?

reply
jrflo
5 hours ago
[-]
With some of the legal discovery happening at Facebook, we know that the company did internal research showing that its products can be addictive and detrimental to kids: https://www.wsj.com/tech/personal-tech/facebook-knows-instag...

That's why I make the cigarette comparison. They know it's bad, but it's profitable for people to be addicted to it. I think it's bad for adults for a different reason: I've seen adults in my own life get influenced by things they see online (conspiracy theories, pseudo-science around health and nutrition, political radicalization). And this happens because it's profitable for people to be hooked on these topics with false or misleading information, not because it's true. That's not to say this never happened before recommendation algorithms, but it's a difference in magnitude. I think that's the reason we are seeing such a dramatic rise in political polarization: because it's profitable.

reply
afavour
5 hours ago
[-]
> Yeah they try to restrict what we can do. We oldschool people call this fascism.

Come on, this is an absurd statement. Governments regulate what people can do, yes. It’s part of their role. It’s why I can’t sell tainted meat on the street. It’s a good thing.

Of course there is a line you can cross where the control becomes excessive but “the government sets rules around what people can do, that’s fascism!” is absurd.

reply
achenet
5 hours ago
[-]
Yes.

Fascism isn't government making laws, fascism is "we're the superior race, kill anyone who disagrees".

I wouldn't call this move fascism, even if can be considered a bit heavy handed.

reply
dylan604
3 hours ago
[-]
No, fascism is a comedian is making jokes about me and my wife that I don't like and that comedian should be fired while the company that broadcasts his show should lose their government issued license.
reply
lp4v4n
6 hours ago
[-]
I don't agree with this. Addictive, unless we're talking about a chemical substance or something like that, is a subjective thing. At some point, books, movies, comics, etc, etc might have been considered addictive.

Social networks in general should be banned for underage people; that's the thing. And the social network itself should be liable for verifying the age of its users, like a nightclub is liable for the people who enter it. None of this bullshit operating-system age verification that's, trust me, totally intended to protect kids and not to spy on you.

reply
bogwog
6 hours ago
[-]
> Addictive, unless we're talking about a chemical substance or something like that, is a subjective thing.

What makes you say that? It's well known that the addictive patterns in these apps trigger dopamine the same way drugs do. In a sense, dopamine is the "chemical substance" central to the addiction. Heroin and algorithms are just different ways to get it.

https://med.stanford.edu/news/insights/2021/10/addictive-pot...

reply
Aurornis
6 hours ago
[-]
Everything you do “triggers dopamine”. Reading HN triggers dopamine. Eating breakfast triggers dopamine. Dopamine is also important for movement and many other things.

This is a lame reduction of brain chemistry that has been used to push agendas. Dopamine is not equivalent to addiction.

reply
nomel
20 minutes ago
[-]
> Dopamine is not equivalent do addiction.

What about porn/sex addiction, which is also dopamine? Gambling addiction? Also dopamine.

reply
bogwog
5 hours ago
[-]
> calls it a lame reduction of brain chemistry

> posts a lame reduction of the argument

reply
Aurornis
4 hours ago
[-]
The argument was literally that "social media triggers dopamine", which was supposed to imply something else.

Stop trying to say “dopamine” when you really mean to refer to a behavioral problem.

reply
SpicyLemonZest
5 hours ago
[-]
It's well known, but I'm not convinced it's true. Dopamine levels are measurable by blood test, and some drug abuse studies perform that measurement. Why does the literature on social media and dopamine exclusively talk in vague and general terms, rather than pointing to specific studies where researchers measured dopamine before and after 30 minutes of TikTok scrolling?
reply
butlike
6 hours ago
[-]
Addictiveness is measured by ∆FosB gene expression. The 'addictiveness' of a substance or activity is qualified by how much ∆FosB is expressed. It's decidedly not just a completely subjective thing. Books, movies, comics, etc. can all still be measured on this scale. Everything is addictive in some capacity, generally.
reply
bootsmann
6 hours ago
[-]
The reason why it is done this way is that “social media” is much harder to delineate and also not what is generally considered harmful.
reply
jampekka
6 hours ago
[-]
Addiction at least is quite straightforward to differentiate from otherwise engaging things by whether it causes significant harmful effects. E.g. per Wikipedia "Addiction is a neuropsychological disorder characterized by a persistent and intense urge to use a drug or engage in a behavior that produces an immediate psychological reward, despite substantial harm and other negative consequences."

Addictive would be then something that (for a substantial portion of population) has a tendency to cause addiction.

reply
simion314
6 hours ago
[-]
>At some point, books, movies, comics, etc, etc might have been considered addictive

The difference compared to a book is that a book is not personalized for each individual reader, so the example is not a good one IMHO.

reply
raydev
18 minutes ago
[-]
Hopefully there's a way to opt out of the inevitably worse experience for those of us who like scrolling through TikTok.

The web was already getting bad with ads and popups, but the EU's sloppy legislation loopholes somehow encouraged all websites to show cookie popups (and never remember my selection), so the experience is even worse.

reply
ge96
4 hours ago
[-]
I don't use those sites myself; I'm not in the age group anyway. One thing: Instagram is riddled with ads, which immediately pisses me off, so that's one reason I don't use it. I get the engagement aspect if you're trying to promote a product.

Recently I stopped scrolling Reddit/being logged in and watching YouTube videos, since it's just endless... and it's crazy. It feels unsettling; I'm bored/antsy since I'm not doing something. So I've been trying to use this time to not multitask, e.g. I used to have a YouTube video playing and scroll Reddit at the same time. Now it's like: listen to music, work on a computer, or watch something, and only that. It's also starting to make me seek real socialization, like going to a coffee shop or something, even though unfortunately I live in the suburbs so I'd have to drive to one.

The only stuff I look at too is this site or some entrepreneurial specific site for motivation.

reply
codybontecou
4 hours ago
[-]
Which entrepreneurial sites are you browsing? I feel like I'm at a similar stage in my day-to-day.
reply
ge96
4 hours ago
[-]
I used to read indiehackers but it's hard to tell what's real/promotional now, same for reddit sites that are entrepreneurial.

The one I look at though is wip.chat which is pretty much just a shared to-do list among builders, I'm not part of it, not gonna pay $20/mo or whatever lol, but good to see it.

I used to listen to digital nomad podcasts too, as that was interesting to me, like needing to make money and living day by day. Similarly, I'm not sure if they're real or shilling; lots of Vietnam videos lately, e.g. the "live here for $400/mo" kind of thing.

Was briefly watching starter story on YouTube but also not sure if real or shilling. It's funny I have the $10K/mo right now but it's because I have a job lmao... it sucks having a job, I'm not free.

reply
game_the0ry
24 minutes ago
[-]
The EU gave us annoying cookie pop ups. Now, they want to legislate design. Cool.
reply
yipbub
7 hours ago
[-]
Thanks, I'm an adult and I need it too
reply
mrosenbjerg
7 hours ago
[-]
Had the exact same thought
reply
butlike
6 hours ago
[-]
FWIW, social media use is mediated by ∆FosB expression, so the less you use social media, the less you want to use social media. Timeline of ~3 months.
reply
pllbnk
1 hour ago
[-]
I would like the focus to be on harmful content creators, not so much on the platforms. Platforms have incentives to bubble this content to the top because it's desirable, but there is content that is simply illegal, and it has been uploaded by the same creators for years without any repercussions.
reply
raffael_de
3 hours ago
[-]
So many people here endorsing prohibition affecting everybody just because some folks aren't able to control themselves. I simply don't want the state to nanny me.
reply
antonyt
3 hours ago
[-]
I'm sympathetic to your point, but at some point other people ruining themselves degrades your experience as well.

I don't gamble and in theory don't care if others do. But prediction markets are affecting sports and possibly governance as well in obviously negative ways.

For this specific issue - the more functionally illiterate, zero-attention-span humans we create, the worse all of our lives will become.

reply
raffael_de
3 hours ago
[-]
There have always been losers, junkies, and assholes. I'd rather have some of them than an entire society of adults brainwashed into wearing mental diapers.
reply
amarcheschi
3 hours ago
[-]
People are complaining about products engineered to be as addictive as possible, we're talking about something more than simple self control
reply
raffael_de
3 hours ago
[-]
Everybody has a choice to just uninstall apps, leave the phone at home, switch it off at 8pm, etc. That's what it means to be an adult. Some people get addicted to work or sports (people have even died from it); should we ban those, too?
reply
bognition
3 hours ago
[-]
> some folks aren't able to control themselves

I get that there are some people with anomalous powers of self-control, and I understand that they might have a hard time empathizing with those of us who don't have that level of control. However, chalking the solution up to "be a better person", when in reality corporations are spending billions on the research and design of addictive products, is just short-sighted.

We saw the same thing with smoking. Plenty of people said "meh, why can't the smoker just quit, I did", which missed the point. The tobacco companies knew that if they could get kids smoking at a young age, it was less likely they'd ever quit.

It's naive to assume that the social media companies are not doing the exact same thing.

Yes, there is room for individual accountability, but we also need to be realistic about the amount of energy being spent subverting people's attention and self-control.

reply
raffael_de
3 hours ago
[-]
You seem to believe that those companies hold some magical spell over people. That's exaggerating what is actually possible. All they can do is decide what you see on your feed and things like that. Some people can't handle it. Well, that is their loss... it shouldn't be mine. And no, I couldn't care less if insta or tictac gets banned. The problem is that this is setting a precedent for a dystopian nanny state where adults are treated like children. I don't want to live in such a society.
reply
deferredgrant
1 hour ago
[-]
The child-focused framing makes sense. Adults can argue about autonomy, but kids are not well equipped to bargain with recommendation systems.
reply
tolerance
6 hours ago
[-]
Either the threshold for what defines an "adult" is going to keep being raised, or what defines a "kid" is going to be lowered, to determine who is allowed access to information in transit and who needs to be "safeguarded" from it.
reply
spockz
5 hours ago
[-]
They should also tackle Youtube. Yesterday my Youtube app opened into "Shorts" automatically. Shorts are of the same addictive variety as TikToks.
reply
joshlemer
4 hours ago
[-]
I feel like the core of the problem is that, as consumers, we have hardly any control over the software we run on our devices. People can and have written less addictive YouTube, Reddit, etc. clients, but Google Play and the Apple App Store don't allow them.
reply
awakeasleep
4 hours ago
[-]
You can actually disable Shorts now: open the YouTube time management settings and set Shorts to zero. It's bliss for an old guy like me.
reply
qweiopqweiop
4 hours ago
[-]
Not on web (or the mobile web page either). It's a half baked solution right now.
reply
benhurmarcel
4 hours ago
[-]
If you disable your watch history, Youtube won't show you personalized Shorts anymore. Much less addictive.
reply
ErigmolCt
5 hours ago
[-]
Shorts are basically the same product
reply
ErigmolCt
5 hours ago
[-]
I'm usually skeptical of -protect the children- regulation, but addictive design feels like a real and concrete target
reply
hnthrowaway0315
7 hours ago
[-]
But they are so profitable, and we need them to track people and run a police state efficiently. Ah, let's keep them, but fine them as well, for show.
reply
boringg
6 hours ago
[-]
What else will fund the AI boom but computationally expensive video AI?
reply
garrettjoecox
7 hours ago
[-]
At what point should the responsibility fall on the parent to protect their children from harm?

Don’t get me wrong, if I had my way TikTok wouldn’t exist for anyone, adults included. It’s just so strange to me that so many parents hand their 7 year olds unrestricted access to TikTok and expect someone else to keep their kid safe.

reply
kioleanu
7 hours ago
[-]
I am from Eastern Europe and I've been living for many years in Western Europe. Where I come from, kids get their first phones when they start school at 6 (there's a pre-school year), simply because every other kid has one. I keep coming back in my mind to two examples from my birth country. The first: a friend's kid carrying an 8-inch smartphone in his hand everywhere, because the phone was as big as half his thigh and he would otherwise have needed a bag for it. The second was on a visit to the zoo: I was on a bench next to a family with two young children in a stroller, and both children, who couldn't have been older than 4 or 5, were scrolling TikTok, which was showing them children's content!

In contrast, in Western Europe, my son is now in the sixth grade, more than half his class doesn't have phones, phones are absolutely forbidden on school grounds and at school activities, and they are now doing a class trip where they were told there's a pay phone at the hotel in case they want to call their parents. Our son promptly informed us that he'd rather buy a pack of Pokémon cards than call us, and that 3 days isn't that long anyway.

And it's not only at school: he travels to tournaments with his team every other week, and mobile phones are absolutely forbidden on the team bus. The children read, play games (including chess on a magnetic board), sing and swap stories for hours at a time.

reply
perarneng
7 hours ago
[-]
It's not so easy: they need phones and social media to communicate with their friends. They also need to fit in and find an identity. The algorithms, basically the engagement engines all of these platforms run, are harmful for humanity as a whole. They are marketed as recommendation engines, but they are 100% about engagement, which is why the content you see mostly generates dopamine, either from being fun or from rage at being provocative. They are built to serve one purpose: to keep people using the platform as much as possible. Not because the platform is good, but because it serves content that maximizes engagement.

I read a post where someone said his wife worked for a snack company. They used MRI scans to see how much salt (or sugar) the snacks should have to maximize the response in the brain. Sounds disturbing, right?

Well, engagement engines are the same thing: artificial intelligence optimized to get people to react and stay addicted. Basically AI doing harm. It's not about what's best for the individual's health; it's about what generates the most money for the owner of the platform.

It should not be allowed to build a business around something that exploits human brains. It's basically biohacking our brains for profit.
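
To make that concrete, here is a toy sketch of the difference between ranking a feed purely by predicted engagement and applying an explicit penalty to rage-bait. The posts, scores and penalty term are all invented for illustration; this is not any platform's actual ranking code.

    from dataclasses import dataclass

    @dataclass
    class Post:
        title: str
        predicted_watch_seconds: float  # the model's engagement estimate
        outrage_score: float            # 0..1, how rage-inducing the content is

    FEED = [
        Post("calm explainer video", 40.0, 0.05),
        Post("provocative political clip", 90.0, 0.90),
        Post("funny cat compilation", 70.0, 0.10),
    ]

    def rank_by_engagement(posts):
        # Pure engagement maximization: outrage is rewarded as long as it holds attention.
        return sorted(posts, key=lambda p: p.predicted_watch_seconds, reverse=True)

    def rank_with_wellbeing_penalty(posts, penalty=60.0):
        # Same ranking, but rage-bait is explicitly down-weighted.
        return sorted(posts,
                      key=lambda p: p.predicted_watch_seconds - penalty * p.outrage_score,
                      reverse=True)

    print([p.title for p in rank_by_engagement(FEED)])           # provocative clip first
    print([p.title for p in rank_with_wellbeing_penalty(FEED)])  # cat video first

With pure engagement ranking the provocative clip wins; with even a crude outrage penalty it drops to the bottom. The business incentive is to keep the first ranking.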

reply
tolerance
7 hours ago
[-]
Apparently parents are spending more time with their children than ever, dads especially. Paradoxically, alongside that, there's the problem you're describing.

Personally, I think some parents are afraid of their children growing to resent them for infringing on their "freedom" by keeping them away from the dangers that social media and other technologies present.

reply
Mashimo
7 hours ago
[-]
> the responsibility of a parent to protect their children from harm

I agree with you, but only in theory, because that's where we are now and it does not seem to be working that well.

Maybe through more education? But then again, I think reducing addictive tactics like endless scrolling could be part of a two-pronged approach.

With alcohol we have education on what happens, but we also have laws that regulate it.

reply
bogwog
6 hours ago
[-]
Replace TikTok with cigarettes, and it'll hopefully make sense to you. There was a time when people had no idea that smoking was bad for you, which is where we are now with these apps.

And since they're addictive, kids will find a way to get them even if their parents don't allow it. That's why it's more effective to require ID when buying cigarettes than it is to shame people for not being perfectly vigilant parents.

BTW, I'm not saying age verification is the solution here. IMO, we should instead ban addictive social media completely. Eg, target specific design patterns/features, require companies to disclose how their algorithms work to regulators, etc.

reply
kubb
7 hours ago
[-]
When it works.
reply
data-ottawa
1 hour ago
[-]
I'm all for regulating dark patterns, but I don't think it's easy to say what's addictive design or not. We have no objective basis to measure this with.

You could consider unlimited scroll a dark pattern and block that, just like you'd want to block making it near impossible to cancel a service. That's testable and clear.

But "addictiveness" is just too wide a net and unenforceable (or rather, selectively enforceable).

And then there's the EU claiming its own age verification app is the most private and most secure app in the world, even though it has already been hacked; that is laughable [1].

I'm exhausted by out-of-touch and technologically illiterate politicians declaring mission accomplished on these things.

I understand that the free, venture-capital-backed market has failed humanity in many respects. If we want solutions to the _human_ problems technology causes, then yes, we need some regulation. But once again, this is just sloppy, lazy, bad law: not based on evidence, not based on any standards, and not clear at all.

[1] https://www.politico.eu/article/eu-brussels-launched-age-che...

reply
xgkickt
3 hours ago
[-]
Why do they do it? To make money. How do they make money? Advertising. Designate them broadcasters and hold them to the same advertising standards.
reply
kwanbix
4 hours ago
[-]
Targeting kids is good. But what about adults?
reply
seydor
6 hours ago
[-]
They are going to put kids on a drip feed: the addiction is still there, just in limited amounts per session. Intermittent reward is actually the perfect schedule for an advertising company; you don't want people making unmonetizable page views.
reply
epolanski
6 hours ago
[-]
Never understood the focus on kids; the 50+ crowd looks to me to be by far the most addicted.

Which also makes it a matter of parents and grandparents setting bad examples.

reply
bschwarz
7 hours ago
[-]
Imagine the pressure on Instagram and Tiktok to serve better content if they were forced to pick out, say, 100 short videos per person per day. And not just for kids, adults need a break from this addiction machine as well.
reply
christkv
4 hours ago
[-]
What about YouTube Shorts?
reply
thedetailsguy
6 hours ago
[-]
Isn’t it more of “emotional” design than “addictive” design?
reply
nirui
6 hours ago
[-]
You know, yeah, you can crack down on "addictive design", but then what?

If you don't provide a better alternative, the "kids" (and please, stop using "kids" as an excuse, because everybody can see through it now) will just stick to these platforms because, believe it or not, these platforms are much, MUCH safer than the alternatives.

How about we look at the real problem here: 24% of EU children were at risk of poverty or social exclusion in 2024, see https://ec.europa.eu/eurostat/web/products-eurostat-news/w/d.... That's not just a statistic about children; it's also about their parents.

Do you know that if you go outside, there's this huge risk of having to PAY for stuff you don't actually need to live? Like transportation to places that don't bring you wealth, like drinks you order even though you're not that thirsty, like movie tickets just so things won't get too awkward once all the conversation topics are exhausted? Did these politicians somehow forget that all of this costs money, in this economy that they helped to create?

And that is not to mention the REAL risks, such as drugs (the bad kind), rude or crazy drivers, and unpleasant adults whose only purpose in life is to earn enough money to keep themselves going a little bit longer, just to name a few.

..... ORRRR, you can just stay in your comfortable home, sit on your soft and warm sofa, and swipe your life away on TikTok or Instagram for free, safely.

You see the problem here?

I'm really sick and tired of these politicians putting on this act of pretending to "love children", when in reality what they are doing is applying easy patches to hide the real problem, which is poverty and inequality. That's the real problem.

reply
caaqil
6 hours ago
[-]
In the modern world: any tech proposition that starts with protection of children as a goal can be dismissed out of hand, since it's emotional manipulation masquerading as tech policy. When I hear "protect kids", all I see is a sleazy politician bowing to their respective Security State apparatus.
reply
shevy-java
6 hours ago
[-]
I do not buy this "holy knight war" by the EU at all.

It also makes no real sense to me.

Nothing against US mega-corporations paying fines, mind you, but I equally do not trust the EU bureaucrats. There has to be a limit to what politicians, corporations and bureaucrats can do, while retaining a democratic base system at all times. If you go against addictive design, then why not against ALL ads? I don't want to see any ads. uBlock Origin changed my mind here: I see no reason why I would ever want to burden my brain cells with irrelevant content.

Website layout is a bit different, though. I equally fail to see why the EU should meta-regulate what is permissible in terms of design and what is not. Why would I have to accept some random EU bureaucrat's judgement here? If a user interface sucks, I'd rather expect uBlock Origin to kill it off; this could also be community-maintained. No need for the EU to waste taxpayers' money. After the EU's push to sniff age data and its declared holy war against VPNs, I do not trust anything coming from Brussels. Even less so with Ms. von der Leyen in charge; can't the anti-corruption offices in Germany get rid of such lobbyists?

reply
qweiopqweiop
4 hours ago
[-]
Weird analogy, since ads aren't addictive in the same way. Look at people out and about: they're all mindless zombies flicking through TikTok, not looking at ads. IMO this doesn't go far enough.
reply
nalekberov
7 hours ago
[-]
Why, it’s always okay to harm adults?

Like adults spending their hours scrolling through infinite feed is somehow beneficial to the society?

reply
1vuio0pswjnm7
5 hours ago
[-]
Imagine if Big Tobacco had something like Section 230
reply
doublerabbit
4 hours ago
[-]
10 years too late. Did they really not see this coming, or did they turn a blind eye again? I am fed up with this corrupted world.
reply
NickC25
5 hours ago
[-]
Just do what China does, how fucking hard is it? They have 4x, almost 5x the population of the US.

STEM or verifiable educational content only. Have a review team and an AI that moderates content. No politics, no stupid dances, no monetization of content, no slop, and only credentialed people can post on certain topics (ie a delivery driver shouldn't make posts on theoretical mathematics).

reply
tt24
4 hours ago
[-]
What you described sounds like an authoritarian hellscape and the suggestion would be met with laughter in any civilized country.
reply
NickC25
3 hours ago
[-]
It's what happens in China with Douyin, TikTok's domestic counterpart. The Party said the app needed to carry educational content instead of stupid content, and the company complied.

The CCP may be many bad things, but they don't fuck around when it comes to education and making sure their youth aren't wasting their time on stupid dance videos.

reply
tt24
3 hours ago
[-]
I am aware that that’s what happens. That’s what I’m criticizing.
reply
claudiug
3 hours ago
[-]
how about the shareholders? /s
reply
sylware
8 hours ago
[-]
Yeah yeah, virtue signaling, while most EU online services are now gated behind one of the WHATWG cartel's web engines (in practice, Google's Blink); i.e. EU web sites are broken in favor of web apps.

They should restore interop with noscript/basic HTML web engines (past, present and future).

Then they should be careful with their file formats: for instance, you never give carte blanche to a format as disgusting as PDF; you very carefully define an as-simple-as-possible subset of it (with some internal software for validation).

reply
Mashimo
7 hours ago
[-]
Is ending endless scrolling really virtue signaling? Don't you think it will have a measurable effect?
reply
nanapipirara
8 hours ago
[-]
Yeah yeah, whataboutism.

I'm very happy they're taking a stance. I've seen too many messed up kids and there's no doubt the addictive design plays a big role in the problem.

reply
soco
7 hours ago
[-]
I can't help noticing that every time, really every time, the EU lifts a pinky finger against the tech industry, a sizeable chunk of the comments here are like the one above. I wonder: is it a general sentiment against the EU? Or against restricting technology? Or against humans? Or what?
reply
palata
6 hours ago
[-]
I think it's easier and safer to complain about everything than to actually have a nuanced and informed stance.

Look at age verification: it's very easy and very safe to say "nobody sane would think it is a good idea to force people to show their ID to every website they want to access; it will obviously leak the IDs, and that is very bad!". While that is not wrong, it is manipulative: it is not the only way to implement age verification. In fact, technology exists that would allow age verification in a privacy-preserving manner: a service that already has access to your ID can give you a token that proves your age, and you can then use this token to access a website. The service cannot know where you use the token, the website cannot learn your ID, and the two cannot collude.
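
A minimal sketch of that token flow, using an RSA blind signature as one possible way to do it (toy key sizes, an invented "over-18" claim and an imaginary issuer, purely for illustration; this is not the EU's actual scheme):

    import hashlib
    import secrets
    from math import gcd

    # Toy RSA key for the ID-holding issuer (e.g. a bank or government service).
    # A real deployment would use 2048+ bit keys and a standardized
    # blind-signature scheme; tiny primes are used here only so the
    # example runs instantly.
    P, Q = 61, 53
    N = P * Q
    E = 17
    D = pow(E, -1, (P - 1) * (Q - 1))

    def claim_digest(claim: str) -> int:
        return int.from_bytes(hashlib.sha256(claim.encode()).digest(), "big") % N

    # 1. The user blinds the claim "over-18" so the issuer never sees the final token.
    claim = claim_digest("over-18")
    while True:
        r = secrets.randbelow(N - 2) + 2
        if gcd(r, N) == 1:
            break
    blinded = (claim * pow(r, E, N)) % N

    # 2. The issuer, who has already verified the user's ID, signs the blinded value.
    blind_sig = pow(blinded, D, N)

    # 3. The user unblinds it, obtaining a valid signature on the claim
    #    that the issuer cannot link back to this issuance session.
    token = (blind_sig * pow(r, -1, N)) % N

    # 4. The website verifies the token with the issuer's public key
    #    without ever learning who the user is.
    assert pow(token, E, N) == claim
    print("age token accepted; no ID revealed to the site")

The issuer learns that you requested a token but not where you spend it; the site learns only that some trusted issuer vouched for your age.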

So the constructive debate around age verification is this: assuming we implement it properly (i.e. in a privacy-preserving manner), is that something that we want or not? Does it solve a problem, or at least does it help?

But we can never elevate the debate to that level, because nobody can be arsed to get informed about it.

reply
eowln
7 hours ago
[-]
The sentiment is that having to present our ID to use TikTok gives us the heebie-jeebies, and for good reason.

Also, nobody voted for the Commission.

reply
palata
7 hours ago
[-]
> The sentiment that having to present our ID to use tiktok gives us the heebie-jeebies, and for good reason.

So push for privacy-preserving age verification, such that you don't need to leak your ID to anyone but TikTok can still prevent kids from accessing it?

reply
eowln
7 hours ago
[-]
>privacy-preserving age verification

No such thing.

reply
palata
6 hours ago
[-]
That's my problem with the debate: people like you seem very proud to be uninformed. It exists just as much as end-to-end encryption exists. It's cryptography; it's not up for debate.

But people who have no clue are very vocal about their belief that it does not exist.

reply
NewsaHackO
2 hours ago
[-]
I also am not aware of any, do you have any examples?
reply
eowln
6 hours ago
[-]
There are no active implementations that do not suffer from severe issues.
reply
soco
6 hours ago
[-]
There are two ongoing implementations: a weaker one in the EU (boooo) and a good one in Switzerland. Neither has severe issues. Questions?
reply
eowln
5 hours ago
[-]
I have none; I'm already aware of how the deployed systems function, hence my statements of fact.
reply
palata
5 hours ago
[-]
You obviously have no idea, and this really looks like a new account made for trolling (negative karma).
reply
ToucanLoucan
7 hours ago
[-]
Boiling kids' (and adults') brains probably makes them a decent chunk of money, either directly via salary or indirectly via stock. Ensuring kids remain healthy makes no money. An unfortunately large slice of the tech sector doesn't give the tiniest shit about the health of our broader society, or any group in it, if it means their lines stop going up, or even go up slightly less fast.
reply
watwut
7 hours ago
[-]
Imo, both. The more right-wing people started taking an aggressively anti-EU stance once Vance openly stood on the side of Orban and against the EU and democracies in general.

And some people see tech companies as worship-worthy, so trying to restrict them is a kind of blasphemy.

reply
modo_mario
6 hours ago
[-]
The Vance thing is far too recent, and inconsequential across Europe.

The sentiment precedes all that and mostly stems from the EU having been, in some ways, originally lib-left dominated and still being seen as facilitating non-EU migration.

Regular right-wing people (i.e. not the many parties potentially receiving donations) don't tend to love giant web-tech companies, especially since they feel those companies are often used as a tool against them, and they aren't a local thing that attracts nationalists either.

A focus on privacy also isn't a very left-right defined thing, though I have noticed that the most far-reaching expressions of it come a bit more from the further ends of that spectrum (you'll see some very left-leaning people at FOSDEM's privacy-focused stands, for example).

reply
dgellow
6 hours ago
[-]
> don't tend to love giant webtech companie

That’s a bit outdated since musk bought twitter

reply
thiago_fm
7 hours ago
[-]
Why should only kids be protected from addiction?

I have a hard time understanding this.

We have plenty of adults with terrible social media addictions that are destroying their lives, and nothing is being done about it.

reply
mrintegrity
4 hours ago
[-]
My girlfriend's mother is visiting: 60 years old and constantly looking at Instagram. I was sharing my data plan so she could use her phone outside the house, but I started lying that my plan had run out. Gives "parental controls" a whole new meaning!
reply
gib444
7 hours ago
[-]
Makes it an easier sell politically. If you position it as dangerous to kids in particular, your opposition then looks like they're encouraging child harm.
reply
palata
7 hours ago
[-]
Well and if you tell adults that they need to be regulated, they get pissed very, very quickly.
reply
indymike
6 hours ago
[-]
This is the best question of all. Why are we allowing this?
reply
evanjrowley
7 hours ago
[-]
The most on-brand solution for the EU would be to require mobile phone users to upload brain scans in real-time so the state can check for neural activity associated with addiction.
reply
buellerbueller
6 hours ago
[-]
The most on brand solution for a kneejerk reactionary American would be to satirize the EU for its consumer protections.
reply
irusensei
5 hours ago
[-]
Pretty sure the candy-colored EU paradise is, even today, discussing breaking encryption and privacy for its citizens.

I'm posting from the EU.

reply
buellerbueller
4 hours ago
[-]
-In the EU it is a political discussion.

-In the USA, the government buys the data legally from the parties who are allowed to collect it and there are no political discussions.

One of these is obviously better than the other.

reply
irusensei
3 hours ago
[-]
> "In the EU, there is a political discussion about outlawing privacy."

> "In the US, the government is using illegal methods to violate rights granted by the constitution."

Agreed, one is obviously better than the other. The EU treats privacy as a privilege granted by the government, while the US treats it as a fundamental right.

I'm not a fan of the US gov, but I also don't agree with the candy-colored view of the EU as an institution that does no wrong, and when it does, it's "well intended but misguided".

reply
buellerbueller
13 minutes ago
[-]
>Agreed one is obviously better than the other. The EU treats privacy as a privilege granted by the government while the US treats it as a fundamental right.

Point me to where privacy is considered a fundamental right.

reply
euejcidnf
4 hours ago
[-]
And who will crack down on the EU making Europe a little bit shittier every day? The only smart thing they have done is not ruining all the old, super beautiful historical city centers, but even that is for tax money, not for improving the locals' lives. Their new hobby is taxing Chinese and American companies (and of course their own citizens, even more so in the coming years), but that won't bring back prosperity. It's all gone, Europe.
reply
Ylpertnodi
4 hours ago
[-]
Which bit of Europe are you talking about? Bari, or Utrecht?
reply
hamasho
4 hours ago
[-]
When I started working as a programmer, my company held a book club, and one of the books was about the effectiveness of A/B tests. We read how Amazon changing the color of the purchase button drastically increased sales, or some shit like that. Everyone was excited. The executives, the marketing team, and the dev team praised how clever it was and insisted we should do the same.

It took me about five years to realize it was really not a good idea for a small B2B business to spend part of its limited resources on that. I needed several experiences to understand that in many cases good customer relationships and a reliable system matter much more. And it wasn't until recently that I started thinking, "wait, if it tricks users into doing something they're unwilling to do, isn't it unethical?"

It makes me wonder how little we think about the ethics and consequences of our investments. It's not that we understand it's unethical but do it anyway for profit; we simply don't care how unethical it can be, not even slightly, until the evidence of harmful effects becomes impossible to ignore after years and decades.
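
As an aside, the "effectiveness of A/B tests" that book was selling usually comes down to a two-proportion comparison like the sketch below; the visitor and conversion numbers are invented, not Amazon's.

    import math

    def ab_test(conv_a, n_a, conv_b, n_b):
        """Two-proportion z-test for a simple A/B experiment.

        conv_a/n_a: conversions and visitors for variant A (control)
        conv_b/n_b: conversions and visitors for variant B (new button color)
        Returns both rates, the z statistic and a two-sided p-value.
        """
        p_a = conv_a / n_a
        p_b = conv_b / n_b
        # Pooled conversion rate under the null hypothesis of "no difference"
        p_pool = (conv_a + conv_b) / (n_a + n_b)
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        # Two-sided p-value from the standard normal CDF
        p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        return p_a, p_b, z, p_value

    # Hypothetical numbers: 10,000 visitors per variant
    p_a, p_b, z, p = ab_test(conv_a=500, n_a=10_000, conv_b=560, n_b=10_000)
    print(f"control {p_a:.1%}, variant {p_b:.1%}, z={z:.2f}, p={p:.3f}")

None of that math tells you whether nudging users toward the purchase button is something they actually wanted, which is exactly the ethical question above.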

reply