This isn’t inherent to social networks though. It is a choice by the biggest social media companies to make society worse in order to increase profits. Just give us a chronological feed of the people/topics we proactively choose to follow and much of this harm would go away. Social media and the world were better places before algorithmic feeds took over everything.
Going beyond social media, it's IMHO the side effect of an initially innocent-looking but dangerous and toxic monetization model, one we find today not just in social media but even more so in news, apps, and most digital markets.
Sort of; it's mostly that freely accessible channels need to keep their content within a certain ~PG/age-protection range (and in many countries that range also changes depending on the time of day; not sure about the US).
Beyond that, the constitution disallows any further regulation of actual content.
Though that doesn't mean the government can't apply subtle pressure indirectly.
Is that legal? No.
Has it been done anyway, for years? Yes.
But mostly subtly, not by force, i.e. giving "suggestions" rather than demanding required changes.
Except that in recent years it has become a lot less subtle and much more forceful: not just giving non-binding "suggestions", but also harassing media outlets in other, seemingly unrelated ways if they don't follow those "suggestions".
PS: Seriously, it often looks like the US doesn't really understand what free speech is about (some of the more important points being freedom of journalism, of teaching, and of expressing opinions through demonstrations and the like), why many historians consider its approach good but suboptimal, and why, for example, the approach to free speech was revisited when drafting the West German constitution instead of more or less copying the US constitution (the US, but also France and the UK, had some say in its drafting; it was originally meant to be temporary until reunification, but in the end was kept mostly verbatim during unification because it had worked out quite well).
In the US there is free speech protecting the ability of people to say what they want.
Public TV has limitations on broadcast of certain material like pornography, obviously, but the government can’t come in and “control” the opinions of journalists and newscasters.
The current US admin has tried to put pressure on broadcasters it disagrees with and it’s definitely not a good thing.
You really do not want to encourage governments to “control” what topics cannot be discussed or what speech is regulated. Sooner or later the government will use that against someone you agree with for their own power.
https://corp.oup.com/news/the-oxford-word-of-the-year-2025-i...
More like the exposure of institutions. It's not like they were more noble previously; their failings were just less widely understood. How much of America knew about Tuskegee before the internet? Or the time National Geographic told us all about the Archaeoraptor, ignoring prior warnings?
The above view is also wildly myopic. You thought modern society had overcome populist ideas, extreme ideas, and social revolution, all of which have been very popular historically? Human nature does not change.
Another thing that doesn't change? There are always, as evidenced by your own comment, people saying the system wasn't responsible; it's external forces harming the system. The system is immaculate, the proletariat are stupid. The monarchy didn't cause the revolution, ignorant ideologues did. In any other context, that's called black-and-white thinking.
https://en.wikipedia.org/wiki/Unethical_human_experimentatio...
I never understood why this doesn't alarm more people on a deep level.
Heck, you wouldn't get ethics approval for animal studies on half of what we know social media companies do, and for good reason. Why do we allow this?
Also I would like an example of something a social media company does that you wouldn't be able to get approval to do on animals. That claim sounds ridiculous.
One possible example is the emotion manipulation study Facebook did over a decade ago[0]. I don't know how you would perform an experiment like this on animals, but Facebook has demonstrated a desire to understand all the different ways its platform can be used to alter user behavior and emotions.
0: https://www.npr.org/sections/alltechconsidered/2014/06/30/32...
Not sure what public infrastructure has to do with it. Access to public infrastructure doesn't confer the right to regulate anything beyond how the public infrastructure is used. And in the case of Meta, the internet infrastructure they rely on is overwhelmingly private anyway.
Fun fact: the last data privacy law the US passed was about video stores not sharing your rentals. Maybe it's time we start passing more; after all, it's not like these companies HAVE to conduct business this way.
It's all completely arbitrary; there's no reason why social media companies can't be legally compelled to divest from all user PII and required to go through regulated third-party companies for such information. Or we could force social media companies to allow export of user data, or to follow consistent standards, so competitors can easily enter the market and users can easily follow.
You can go for the throat and say that social media companies can't own an advertising platform either.
Before you go all "oh no, the government should help the business magnates more, not the users," I suggest you study how monopolies operated in the 19th century, because they look no different from the corporate structure of any big tech company, and see how the government finally regulated those bloodsuckers back then.
I must be really good at asking questions if they have that kind of power. So here's another: how would we ever even know those changes were making users more depressed if the company didn't do research on them? Which they would never do if you made it a bureaucratic pain in the ass.
And, no, I would much rather have the companies I explicitly create an account with and interact with be the ones holding my data, rather than some shady third parties.
I don't know why people are being overly reactive to the comment.
Research means different things to different people. For me, research means "published in academic journals". He is merely trying to get everyone on the same page before a conversation ensues.
These types of comments are common on this site because we are actually interested in how things work in practice. We don’t like to stop at just saying “companies shouldn’t be allowed to do problematic research without approval”, we like to think about how you could ever make that idea a reality.
If we are serious about stopping problematic corporate research, we have to ask these questions. To regulate something, you have to be able to define it. What sort of research are we trying to regulate? The person you replied to gave a few examples of things that are clearly ‘research’ and probably aren’t things we would want to prevent, so if we are serious about regulating this we would need a definition that includes the bad stuff but doesn’t include the stuff we don’t want to regulate.
If we don’t ask these questions, we can never move past hand wringing.
If they are going to publish in academic journals, they will have to answer to those bodies. Whether those bodies have any teeth is a whole other matter.
These bodies are exactly what makes academia so insufferable. They're just too filled with overly neurotic people who scrutinize research way past the point of diminishing returns because they are incentivized to do so. If I were to go down the research route, there is no way I wouldn't want to do it in the private sector.
Abstract: "To what extent is social media research independent from industry influence? Leveraging openly available data, we show that half of the research published in top journals has disclosable ties to industry in the form of prior funding, collaboration, or employment. However, the majority of these ties go undisclosed in the published research. These trends do not arise from broad scientific engagement with industry, but rather from a select group of scientists who maintain long-lasting relationships with industry. Undisclosed ties to industry are common not just among authors, but among reviewers and academic editors during manuscript evaluation. Further, industry-tied research garners more attention within the academy, among policymakers, on social media, and in the news. Finally, we find evidence that industry ties are associated with a topical focus away from impacts of platform-scale features. Together, these findings suggest industry influence in social media research is extensive, impactful, and often opaque. Going forward there is a need to strengthen disclosure norms and implement policies to ensure the visibility of independent research, and the integrity of industry supported research. "
I meant, I no longer know who to trust. It feels like the only solution is to go live in a forest, and disconnect from everything.
Also feel you wrt living in a forest and leaving this all behind.
Because that's where people with that expertise work.
This comes up somewhat frequently in discussions of pet food. Most of the companies doing research into pet food - e.g. doing feeding studies, nutritional analysis, etc - are the manufacturers of those foods. This isn't because there's some dark conspiracy of pet food companies to suppress independent research; it's simply because no one else is funding research in the field.
Whole industries have been paid off for decades; the hope lies with independent journalists with no ties to anybody but the public they want to reach.
Find one independent journalist on YT with lots of information and sources to back it up, and you will notice how we have been living in a lie.
Academia is basically a reputation-laundering industry. If the cigarette people said smoking was good, or the oil people said the same about oil, you'd never believe them. But they and their competitors fund labs at universities, and sure, those universities may publish stuff the funders don't like from time to time, but overall things are gonna trend toward "not harmful to benefactors". And then what gets published gets used as the basis for decisions on how to direct your tax dollars, deploy state violence for or against certain things, etc. And of course (some of) the academics want to do research that drives humanity forward or whatever, but they're basically stuck selling their labor to the donors (after several layers in between) for decades in order to eke out a little bit of what they want.
It's not just "how the sausage is made" that's the problem. It's who you're sourcing the ingredients from, who you're paying off for the permit to run the factory, who's supplying your labor. You can't fix this with minor process adjustments.
On the other hand it puts a big fat question mark over any policy-affecting findings since there's an incentive not to piss off the donors/helpers.
And yet one person kills a CEO, and they're a terrorist.
In terms of health insurance, which is the industry where the CEO got shot, we can pretty definitively say that it's worse. More centralized systems in Europe tend to perform better. If you double the number of insurance companies, then you double the number of different systems every hospital has to integrate with.
We see this on the internet too. It's massively more centralized than 20 years ago, and when Cloudflare goes down it's major news. But from a user's perspective the internet is more reliable than ever. It's just that when 1% of users face an outage once a day it gets no attention, but when 100% of users face an outage once a year everyone hears about it, even though the latter scenario is more reliable than the former.
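To make that arithmetic concrete, here is a rough back-of-the-envelope sketch; the 1%-per-day and 100%-per-year figures are just the hypothetical rates from the scenario above, and it counts outage events per user while ignoring duration and correlation:

    # Expected outage events per user per year under each hypothetical scenario.
    decentralized = 0.01 * 365   # 1% of users hit by an outage each day   -> ~3.65 per user per year
    centralized = 1.00 * 1       # 100% of users hit by one outage a year  ->  1.00 per user per year
    print(decentralized, centralized)  # 3.65 1.0

So even though the centralized failure is far more visible, the average user sees fewer outages.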
I'm talking about intentional actions that lead to deaths. E.g. [1] and [2], but there are numerous such examples. There is no plausible defense for this. It is pure evil.
But that doesn't seem to be true at all. He just had a whole lot of righteous anger, I guess. Gotta be careful with that stuff.
There is a great deal of injustice in the world. Psychologically healthy adults have learned to add a reflection step between anger and action.
By all evidence, Luigi is a smart guy. So one can only speculate on his psychological health, or whether he believed that there was an effective response which included murdering an abstract impersonal enemy.
I'm stumped, honestly. The simplest explanations are mental illness, or a hero complex (but I repeat myself). Maybe we'll learn someday.
"Former UnitedHealth CEO Andrew Witty published an op-ed in The New York Times shortly after the killing, expressing sympathy with public frustrations over the “flawed” healthcare system. The CEO of another insurer called on the industry to rebuild trust with the wider public, writing: “We are sorry, and we can and will be better.”
Mr. Thompson’s death also forced a public reckoning over prior authorization. In June, nearly 50 insurers, including UnitedHealthcare, Aetna, Cigna and Humana, signed a voluntary pledge to streamline prior authorization processes, reduce the number of procedures requiring authorization and ensure all clinical denials are reviewed by medical professionals."
https://www.beckerspayer.com/payer/one-year-after-ceo-killin...
When one gets fired, quits, retires, or dies, you get a new one. Pretty fungible, honestly.
But yeah, shooting people is a bad decision in almost all cases.