https://www.bbc.com/news/articles/czx91rdxpyeo
https://www.cnn.com/2026/04/10/tech/suspect-arrest-openai-ce...
[0] https://www.reddit.com/r/ChatGPT/comments/1shugf8/firebomb_t...
We pour tons of effort into punishing visceral, direct violence like a stabbing or shooting. But when white collar crime leads to the deaths of hundreds of thousands of people, it's rare that anyone sees jail time. Maybe you could argue that the decisions Brian Thompson made account for only 10% of why XYZ died, but when you scale that out, you could easily argue this to be a form of white collar mass murder.
I think the younger generations are increasingly aware of this disparity in justice. If you find it hard to understand the celebration of violent vengeance but don't feel the same inability to understand the celebration of Jeffrey Doucet's retribution, then perhaps you are lacking the sociological imagination.
[1] https://www.pewresearch.org/religion/2026/03/05/in-25-countr...
Nobody likes how insurance companies do business, but that doesn't make it "crime".
Things have deteriorated lately, and the population does not see the justice system as effective.
It is completely expected that we see vigilantism, but it is in no way extrajudicial.
The way they "delay, deny, defend" as a matter of course shows a lack of good-faith execution of the insurance agreements, to the point that a sane world would recognize it as extremely obvious (and documented!) fraud. Sure, it is de jure not fraud, but tell that to someone who didn't get the insurance payments they were owed to pay for life-saving treatment (or, I guess, tell it to their grave).
From a "we live in a civilized society" perspective, I can see why some people are outraged about his killing.
Finally, looking at the balance sheet of his accomplishments, I can also see why the pitchfork crowd is cheering.
https://en.wikipedia.org/wiki/Fist%2C_Stick%2C_Knife%2C_Gun
People become okay with vigilante justice when they see the executive branch as compromised; just look at the insane plot/ending of the film Singham.
Many people see this happening in the US. We should expect to see more vigilante justice and organized crime if we see the executive branch as having a significant principal-agent problem.
Religious institutions had some access to legitimate violence in a way that the state couldn’t control. Once authoritarianism gave way to more democratic governance, that effectively disappeared.
Organized crime is also going to escalate as the economic squeeze continues to hit white collar workers. Pumping out a bunch of computer science graduates and rendering them unemployable isn't going to lead to all of them giving up and working at Walmart. A certain amount are going to figure out that they can make a better living by going black hat. Likewise for all the office managers, etc. who are put out of a job as belts tighten. Threatening the livelihoods of people who were led to expect a certain standard of living and who can organize and exploit systems is exactly how you end up with organized crime. Doubly so when the burden is falling on the young, who have more appetite for risky decisions.
What does that even mean?
And I think the growing class divide in the USA is the reason why folks increasingly see violence against the upper class as the only option.
Again, that doesn't make it right, but it explains why it is almost exclusively a US phenomenon.
Genuine question: is it?
https://carnegieendowment.org/research/2025/09/nepal-gen-z-t...
> [...] not defending [the people who "seem to genuinely be OK with violence"]—or even Luigi (the one who carried out the violence in question)— [...]
When people feel that law and order do not protect them, some will eventually go "the extra mile" (somehow managers always like this phrase). It's not something we can prevent. It is human nature. I guess the super rich really like AI because it gives them extra protection.
What do you mean, historically? Violence backs every government decree, from speeding tickets to the maximum water flow rate of urinals.
Overwhelming violence is something that people will go to amazing lengths and spend nearly all of their economic surplus to avoid.
Could you explain what packages are and what depends on (what?)?
> Historically violence has been a very...effective tool.
This is dramatic sci-fi for anarchists of all political stripes.
The critical reality to understand is that violence is the most ineffective tool, causing catastrophic harm to others and outcomes that the perpetrators rarely control or foresee. Revolutions can overthrow status quo power, but what follows is rarely what the perpetrators aimed for. The same happens in warfare - the outcome is rarely what anyone envisioned at the start, a fundamental lesson that experts try to teach hot-headed amateurs who think warfare will solve their problems.
It also establishes violence as legitimate - usable by everyone else too, a very bad outcome and the opposite of the rule of law, incompatible with freedom; it elevates violence and destruction over life and liberty. In contrast, the American Revolution was founded on principles of freedom and law, and did not embrace violence as desirable - it laid those principles out, for example, in the Declaration of Independence.
The most successful societies have freedom, the rule of law, and allow violence only as a last necessity to restore freedom and the rule of law.
That's pretty rich, since the United States only exists thanks to systemic, deliberate violence on a mass scale against the local population.
Packages = ways to "adapt" to the challenges of the world.
Rule of law - in this case, international law - has governed the Strait of Hormuz and relations between the US and Iran for decades. It's not magical or fantasy at all, but a very well-established and effective mechanism that has been the foundation of the most peaceful world arguably in human history. There is no valid argument that it doesn't work (saying it hasn't worked 100% of the time is not valid).
The Trump administration explicitly aims to destroy that rule of law. I think that's why they attacked Venezuela, Iran, civilian boats, etc. Stephen Miller advocates that power, not law, should rule.
You can see the outcome when international law was applied, and the outcome when it is intentionally destroyed: look simply at the Strait, which had free navigation under international law despite the extreme enmity between Iran and the US and its Mideast allies.
And now, with international law under assault, free navigation has ended. To be clear, I don't only mean the US's and Israel's attack: Developing nuclear weapons would also violate international law, and maybe so does developing highly enriched fissile materials (e.g., uranium). I'm not sure about sponsoring insurgent proxies in other countries, but that has long been practiced by many countries, including the US and many in NATO.
The rule of law allows societies to function. We don't want the world or our communities to function like failed states - those people are poor, starving, and brutally oppressed.
It's not just Trump. Trump and Biden both shredded the rule of law for Israel. I think both parties being captured by a genocidal foreign government has caused mass disillusionment with the ability of the US to act within any framework that brings justice.
Enlightenment and industrialization created societies that were fairer, wealthier, and more free than anything before. They also created ideologies such as communism and nationalism that killed hundreds of millions. If your ideas are good and successful in the long term but create poverty, suffering, and feelings of unfairness in time scales people care about, there will be violence.
Compromises are the key tool in preventing violence. Unfortunately, the word itself carries negative connotations in too many languages, making effective compromises less likely.
Especially when the answer to every "well, why doesn't it work this way?" you could possibly ask seems to come back to "state violence has put its thumb on the scale of society." The government, or "the ruling order" or "the system" (whatever you want to call it), kind of brought this on itself by taking so much crap under its umbrella.
The ugly, uncomfortable part is that when a certain fraction of people decide violence is the answer, a tipping point is reached and that's what happens. Historically, people have reached that point en masse without a great deal of provocation. So for a society to remain successful--or to remain at all--it needs to prevent this tipping point from happening. Force alone can't do that.
It's an interesting exercise to compare these threads.
My own position on the matter is not an edgy one: political violence of any kind* is never justified, but it does signal that something deep in society requires change.
* one exception being defense of life and limb.
- Go onto a Reddit thread about ICE, everyone in the comment threads says they don't like ICE. That's the obvious statement, not edgy.
- Go onto a Reddit thread about Trump, everyone says they don't like Trump. That's the obvious statement, not edgy.
Why would we think the Sam Altman thread is any different? I unfortunately think the Reddit thread might be the real deal, or at least a little more real than you are saying.
As income/wealth inequality grows expect class violence to grow until there is a revolution. We let rich people get too rich and this is the consequence.
Sam has so far lost, say, $100B, and he is compensated by already being a billionaire. You can see how this might lead to disillusionment with the system.
Or I missed a hint and you’re dehumanizing them?
There's example after example in history of people being totally fine with violence against human beings.
- High wealth inequality
- Perceived inability (or reduced ability) to get ahead and have your voice heard
- Government seen as more corrupt and benefiting the elite. Different set of rules for them vs for everyone else
- Highly polarized population at odds with each other
So the temperature has been high for a while and he's on board with it.
Ineffectual molotov cocktails are just a cry for help.
Flip it again: as a crazy, isn’t it reasonable to enact violence against Johnny Nine Nines? If he’s so innocent, how come his house is behind two security fences?
To be a little more reductive: my house is made of gold bricks so I hired an extra-legal anti-marauder militia, but now the marauders see me as a fair fight because I chose extra-legal militia instead of cops and judges… game on and QED.
If you call one violence but the other is okay because there are some layers of misdirection in between you may have to reconsider your ethics.
Playing catch up with Elon.
Temperature is certainly going up, but it definitely hasn’t reached historic levels yet lol.
Because Anthropic just lost their US government contract (AND got slapped with a completely false order that prevents them from working with any government agency) because they wouldn't do the above ... and then OpenAI slid right in and said "yeah, we can do that".
Useful work like generating mountains of deepfake misinformation?
Lol, really? You think there is any chance of that happening in this current political climate? Any whisper at all of rights for workers is immediately shot down as Godless Communist rhetoric.
The data has conclusively proven that moneyed interests prevail over the interests of the people. Every single time.
no, the mob is forming at the gate, and they are starting to climb
Thinking something should be done means nothing is being done. The poor in France didn't start with bread riots. They begged and pleaded and asked nicely first, and while lots of people thought something should be done to help them, nothing was.
Thank you for getting over the line.
You are unequivocally wrong. You probably mean 'similar' instead of 'literally the same'.
People will take actions when the threat is against their livelihood, health and homes, particularly when there is no action being taken on their behalf. Their risk assessment may be different than yours.
Too bad she never said it, though.
Which is very AI forward.
You think people will put up with wildly accelerating inequality forever?
It’s going to explode, the only question is when.
No. Nor do I think they should. But UBI, higher income tax at the top and a wealth tax for the ultra rich sound like a much better plan to me than to blow a bunch of things up.
But when I look at how the US handled previous rounds of globalization and automation, I have very sober expectations for our ability to pursue the "happy path." Still, one has to try.
It's not that people are accepting of violence. That doesn't just happen. Societies don't suddenly turn violent against the state. This only happens when the state has failed and become violent towards the people. If you're surprised by the rising level of violence toward the state, you haven't been paying attention to the rising violence towards the people.
The US was quite literally founded on the idea that it is an inarguable, fundamental human right to overthrow a tyrannical government. The nice and polite mechanisms for doing this have all been broken, removed, violently suppressed, or outright ignored. When there are no peaceful options left, humans will always revolt with as much violence as is necessary. History shows us this over and over. Violently oppressed societies don't tend to stay that way for long, and they certainly don't become hardline pacifists. They always eventually fight back, or they die.
The rising level of violence from the people at large is a proportional reaction to the increasing level of violence against the people. The tyranny has recently upgraded itself from merely an existential threat to the USA as a society to an existential threat to the entire damn planet. Of course the people are going to get violent. They feel there's no other choice, because all peaceful options have been exhausted and met with extreme violence.
That's the consensus I see on the street: all nonviolent options have been met with ever-increasingly extreme violence. When all peaceful options are removed, you pick the only one left.
Through a historical lens, it's all very unsurprising. This is how revolutions happen. This is what humans have always done when met with tyranny and violent oppression. It's only surprising if you willfully ignore and excuse the tyranny and violence against the people.
- Some if not many jobs are at risk.
- AI Psychosis is actively tearing apart families and communities, after social media and opioids have already had a pass.
- Negative social outcomes are in the service of _making money_. Not money to pay taxes to fund a healthy society, but money for the people running these systems.
Humans that lack community, safety, and purpose will embrace more drastic means of exerting control over their lives at the expense of others, no?
It is probably safe to say the temperature has been firmly up for a while. And certain subsets of the population have come to trust their Dear Leader's embrace of violence as a solution, for sure.
Hell, even the president regularly calls for and promotes violence, so I don't think it's that much of a minority. The US was founded on it, after all.
Most of it. Across the political spectrum.
> even if not to such an extreme
That’s precisely the point. There is a massive difference between doing or aiding and abetting such behavior, cheering it on, and giving in to the impulse of “couldn’t have happened to a worse person” before self-correcting. There are a few saints who reject the violence at first glance. But most people are in that self-correcting phase, and the correction happens the more they learn about the specifics of the assault.
> even the president regularly calls for and promotes violence
To what numerical end?
How many people live in Iran, who he just threatened to genocide? How many people hold views that Trump thinks make them his enemy, including being critical of ICE? How many immigrants are in the country? It's going to be a very large number, no matter how you slice it.
But some examples: Jan 6, the attack on Paul Pelosi, every ICE agent.
Personally, I've also received multiple death threats from his supporters.
Zero dispute. I’m challenging the notion that Americans are rising to that call. (Or cheering on specific attacks, versus general notions of violence.)
In a weird way, maybe social media helps in this one instance. We can’t let the enemy be faceless. There is no glory in shooting a specific mother or nurse.
It's only a minority of people who are radicalized, but it's a growing minority. Radical ideas are more accessible than ever for people to latch on to.
Radical views on violence, social relations, science, politics, distrust of institutions, etc are all way more common than they were in the 90s.
I’d want to see this interrogated with rigor. The alternate hypothesis, and my null, is a relatively fixed fraction of folks is more connected and visible today than before.
Here's one of the posts on that thread: "I mean one thing is to use AI or even ChatGPT as a product, and another is being aware of how billionaires treat the rest of the people
As for Sam, he also has pretty controversial views for how this whole thing will pan out and how he doesn't give a shit about the consequences it might have for the rest of us. Also more recently, the whole Pentagon contract thing"
People can use LLMs whilst having a distasteful view of the leaders of the industry.
I have a tremendously distasteful view of a lot of Silicon Valley leadership. Doesn’t mean I want them to suffer at the hands of vigilante justice.
I see no evidence of this. We didn’t see it after Iraq. And Luigi predates all this.
These aren’t organized political movements. They’re lone actors reaching breaking points. They don’t need a theory of violence, just access to guns and a day of mental instability.
All this violence against the innocent in various places and levels, and you think it’s weird that people are fine with violence used against a billionaire conman?
There’s a lot of scary shit going on.
In the US, health insurance is largely tied to employment. Health insurance, in a personal economic sense, reduces to being able to pay for healthcare. This arrangement is largely a leftover from World War II era employment policies. No one is taking healthcare _away_ from anyone (strictly speaking), but the ability to _pay_ for healthcare drops to zero when employment ceases. Accessing the safety net is a separate skillset, and it becomes more difficult because the political class does not want to provide healthcare for everyone, only for the worthy (their loyal voters).
I grew up in and am still a member of the precariat. I am educated and doing well, but I wear a well-polished pair of golden handcuffs due to how my ability to afford healthcare for myself, and my family, is tied to employment. Politically, I _do not_ like being tied to my employer by such a chain, but my arguments to change the system have been met with quite firm push-back.
Also, to be clear, I don't think violence is the way to confront the oligarch sociopaths. There is clearly enough momentum to fix a lot of the monopoly / anti-consumer issues over the next 4-8 years. Assuming Trumpty Dumpty doesn't try to put our military at polling places or some other anti-democracy putinesque bullshit like that.
https://www.palmbeachpost.com/story/news/healthcare/2026/03/...
AI isn't needed for insurance to fuck anyone over.
You know this, stop pretending otherwise.
2. Robots take away jobs from Americans and the proceeds to go the owner (investor) class
3. Americans no longer have healthcare
Understand?
Needless to say, we have discovered that productivity gains are not consistently converted into reduced costs and work hours.
I can also draw pictures of how dangerous humanoid care can be, as there is a possibility of a break in the chain of responsibility. If a human medical professional messes up, you (or your survivors) can sue and seek damages directly, as well as sue the hospital and insurance system (with mixed results).
With humanoids? Currently, the bar is higher, as the entity being sued is not the hospital, nor a person, nor even a team. The only entities that can be addressed are the corporation that runs the hospital and the corporation that produced the humanoid. These two entities have an incredibly outsized advantage in terms of sheer delaying tactics, not to mention arbitration clauses and other legal innovations. Most of the injured will simply give up, which is a legal win for the two entities.
In my opinion, humanoid care will take a large amount of time, damage, and treasure to lower the costs. No actor will willingly give up their cash flow. My view may be too strong.
More likely near-term states are less rosy, given intelligence takes off.
This is to say, doctors protect their own professional interests and would never permit this.
It's worth keeping in mind that in the U.S. the health marketplace is extremely complicated and cannot be analyzed with simple demand/supply graphs.
The billionaire class has enabled armed masked police in our streets and endless layoffs, doesn't pay taxes at any reasonable percentage, and has rigged politics with Citizens United.
Given that, I can see how people are resorting to 18th century French tactics.
Net of transfers, 60% of households receive more from government transfers than they pay in taxes.
The idea that rich people don't pay taxes is just not correct. The entire system is basically rich people subsidizing everybody else through byzantine distributional systems.
Stop acting as if taxation is theft; it’s the fee that allows everything else to function.
The primary role of the state is to protect private property, why not charge by value?
If we have a pool of $100 and I take $99 and you get $1, and then I get taxed $5 and you get taxed $0, I still have almost everything. Is this unfair to me?
It's in fact the opposite of what you said: everyone else is subsidizing the rich, who have gamed the system to live extravagant lifestyles. Eventually this will lead to a revolution and all us rich people will be beheaded. It's the normal outcome of this sort of thing.
I get that a lot of people think people's unrealized capital gains should be taxed, so maybe the argument you're making is something like:
"People with very large paper-gains based on appreciation of the market-value of the assets they own pay 0% taxes on those unrealized gains"
In which case, yeah, that's definitely true. But if they sell those assets, they pay taxes. Some of the taxes from those sales can be offset by doing things like donating enormous sums of money to charity. And sometimes people take loans against their equity, which is not a taxable event. Though, in order to pay those loans back, they have to sell something (taxable) or earn money elsewhere (also taxable). So loans are tax deferral...
But eventually the tax man comes for everybody.
(you are wrong)
On every metric, people in all income brackets are earning more on both a gross and COL-adjusted basis. It is true that top-quintile income has increased more than bottom-quintile income, but a faster relative increase does not mean the other group is getting poorer.
The other very interesting thing is that, statistically, there aren't really fixed "upper ranks" and "lower ranks". The majority of people in the 1% each year are there for the first (and often only) time. And a very, very small percentage of people in the bottom percentiles remain there for their whole life.
Some interesting research:
* 12% of the population will find themselves in the top 1% for at least one year
* Nearly 70% will spend at least one year in the top 20%
* More than half will have at least one year in the top 10%
* While 12% may reach the top 1% at some point, a mere 0.6% stay there for 10 consecutive years
All of that is to say, the idea that there is some entrenched upper class waging war against some entrenched lower class is just empirically not true. If you dig through the data, what you'll find is:
1. People who are just entering the workforce don't make a lot of money
2. As people spend time in the workforce, they make increasingly more money
3. When they retire, they start making less money but tend to have assets to live on
It's far more dynamic than most people's intuition leads them to believe.
I constantly see posts focused on high earners already paying tons of tax. They do, but this should reinforce the point that the ultra wealthy should be paying more tax. People aren’t saying the guy on £500k should pay more, they’re saying the guy with £100m in assets should be.
The sheer tone-deafness of AI marketing is going to come back to bite us very hard. This is probably just the beginning.
And I have no sympathy because this joker has been pushing people to the edge with his hyping.
What kind of weird world are you living under...
The job market is shit, it's nearly impossible for young people to buy houses or pay rent, well-paying jobs are disappearing to AI, inflation is skyrocketing, and people are getting desperate. But then we're told the economy's doing great and billionaires like Musk and Altman are rolling in money.
Seriously shocked that this is the aspect of this moment in history that you choose to focus on, and not the absurd levels of violence perpetrated by the ruling classes against common people.
But where people are "OK with violence" is with state violence.
State violence includes police violence (>1000 people are killed every year in the US by police), prison violence, violently rounding up immigrants and putting them in concentration camps, criminalizing homelessness, denying people life-saving medical care, evictions while landlords collude to raise rents, genocide, sending random people to a maximum security prison in a foreign country (i.e. CECOT), mass shootings, going with a firearm to a protest to instigate an incident and get a legal kill, intentionally creating the opioid crisis, and so on.
For a large number of people some or all of these incidents will get a reaction somewhere between "thoughts and prayers" and "no, it's good actually".
Compare the state's reaction to one healthcare CEO being murdered with its treatment of the perpetrators implicated in the Epstein files. Epstein himself was known to authorities since the 1990s and got an absolutely sweetheart deal in 2008.
So I'd say the real problem is what people view as violence and who's allowed to do it, seemingly without oversight or consequences of any kind most or all of the time.
They did not want a target painted on their backs or being involved with the company responsible for mass job displacement.
Let's hope that SF doesn't turn into a free-for-all after the IPOs, since the silliest thing is for everyone to move to SF and buy up the houses, and then the have-nots realise who got rich.
I'd donate that money away, or give the employees (who have nothing) a one-time bonus / raise like the Five Guys owner [0], so as not to be a target.
[0] https://www.theguardian.com/us-news/2026/mar/27/five-guys-ce...
The world was never a wise and virtuous man's paradise, but it has been quickly sliding into ever increasing and monstrous irrationality. Give Plato's "Republic" a read and you might find it concerning how closely we exemplify the last stages of political and social decline.
If Taylor Swift owns a dozen homes, does she have full time security guards at each one? Or just accept some amount of burglary may occur? Do they go everywhere with a guard? Only to public events?
The silent or unknown ones will often still have something (usually a requirement of their or their company's insurance).
Once you graduate from "2, 3, 5 houses" to "mansions" you will have staff at each one, even if relatively bare-bones.
To hear tell from my coworkers who did go on site, the security was insane, and the media apparatus was insane (like a DVR for every channel running 24x7, so the family could call up whatever, wherever they were, at any time). This is back in like 2010ish, before the marriage blew up.
From https://edition.cnn.com/2025/05/13/entertainment/kim-kardash...
> Kim Kardashian, testifying in the trial of the burglars accused of tying her up and robbing her at gunpoint nearly nine years ago, told a Paris court on Tuesday that she “absolutely thought” her assailants would kill her.
> “I have babies, I have to make it home, I have babies,” Kardashian recalled pleading with the armed men, who had broken into her hotel room while she slept during Paris Fashion Week in 2016.
> Facing her alleged attackers for the first time since the heist, the billionaire reality TV star detailed how she was robbed of nearly $10 million in cash and jewelry, including a $4 million engagement ring – gifted to her by her then-husband Kanye West – that was never recovered.
https://www.sfchronicle.com/crime/article/molotov-cocktail-c...
So the arrested suspect is either the wrong person, did not actually want to kill anyone or has no clue how fire spreads.
A strange incident that will make many people think of a false flag - sending a noose to oneself, so to speak (where "oneself" does not have to be Altman, but a pro-AI org that wants to generate sympathy).
Only minor damage was caused to a metal gate far from the building. Yet people here speak of "bombing Altman". So the sympathy works, and might work on the jury in that trial as well.
For a super simple example, if you don't properly handle or cook raw meat then you risk getting sick even though the food might not immediately taste bad. Maybe that's obvious to you but might not be to the person preparing the food. Another example: Rhubarb pie is supposed to be made with the leaves and not the stalk because the stalk is poisonous and can cause illness. Just kidding, it's actually the other way around but if you were just reading a ChatGPT recipe that made that mistake maybe you wouldn't have caught it.
Everyone grew up with an understanding to never trust random internet content 100%; now we're trying to say that AI has to be 100% reliable.
Bold assumption
https://www.pcgamer.com/software/ai/anthropic-tasked-an-ai-w...
The described action seems performative and emotional, as if they were ideologically opposed to AI. Like spitting out food because it was prepared by a caste you found unclean.
Where does this come from?
cooking is also a form of art, with a strong social aspect. using AI for it has a similar ick factor to using generative AI for pictures. I'm not saying I immediately distrust anyone using it, but I do think it's a sign that maybe the person cares a bit less about what they're doing.
To be clear in the case of the former: Harm data points have approximately one trillion times the weight of no-harm data points, as a rule of thumb.
Let's see what gemini says in response to a more realistic prompt: https://gemini.google.com/share/f0bcbe46c337
Well, look at that. 1.5 lbs of chicken breast in the oven @425 for 10 minutes, and a minute or two of broiling should do the trick.
Unlike all human-written recipes I found, it doesn't give the temperature to cook it to.
IF someone is to the point of worrying about AI recipe risk for chicken, they should have already rejected any food made by amateur or professional cooks due to excessive risk.
But I would be disgusted. Someone told me they planned their vacation with an llm and I couldn't help but express disdain for this friend of mine.
Why are we outsourcing creativity and research and interest in discovery to an llm?
Let's not assume different people find the same parts of the process enjoyable.
I also discovered the best way to go through an art museum is to walk it with AI, taking pictures of each piece of art. It will tell you the historical context of its creation and give a one-page summary of the most fascinating facts. It's like having a team of 100 art history professors in your pocket.
This is also weird. I hate planning vacations, but I like going to them.
Are you right? Yeah, basically. Are you going to laugh at your stupid neighbors until they burn your house down in rage? Maybe? You don't treat fear with malice.
Look at what you're writing.
"Doing X is so clearly irrational that I chuckled a bit."
Please don't perpetuate the image of the elitist techie. That is what was just firebombed.
I think this says more about how neurotic and paranoid people are.
I could totally believe uneducated or less well-adjusted people reacting in the above way, though.
Look at the Grok datacenter in Memphis for one example. The "move fast and break things" mentality in this arena isn't about code anymore; it's being applied to communities.
A) How many other datacenters with similar problems can you name?
B) How does this industry compare to every other one on earth? Look at the disproportionate hate it gets relative to industries that are substantially worse.
When have they bragged about this?
Sam's got a $3 billion net worth.
Jensen's got $165 billion to his name.
They are giddy about taking jobs away, and both are engaged in "tax reduction strategies" and suck up to Donald Trump.
You wonder why people are pissed?
This is just your interpretation. My interpretation is that they talk about computers being able to perform some intellectual tasks that are now handled by humans in a more efficient and fast manner. They're excited about technological progress and new opportunities it provides, not being "giddy" about unemployment and economic uncertainty.
> suck up to Donald Trump
When you're the head of a multi-billion-dollar company that employs thousands of people and answers to a board that expects the company to keep growing and making them money, it's strategically dumb and irresponsible NOT to suck up to the most childish and vindictive person who has real power to screw you over, along with everyone you employ, people who expect you to do everything in your power to prevent that.
If you don't do that and Trump fucks your company over as a result, you're just bad at your job as a CEO.
I think AI is net-good. But every frontier lab founder has said, paraphrasing, "Things might go horribly wrong, a lot of people might lose jobs; or maybe we'll have an even better economy and people will prosper. We can't operate based on the former, because if our adversaries out-invent us, we're screwed." Like all AI-adjacent companies, they talk about long-term savings, increasing productivity, needing fewer people to do the same jobs, etc. Obviously fully employed people, especially those with something to lose, don't want that.
Also, this is not a uniquely American thing. People in China are going through the same stuff.
People trust AI outputs in ways they should not. They don't understand its sycophantic design and succumb to AI psychosis. They deploy it in antisocial ways, for war, or spam, or scams. They use it to justify layoffs. They use it as a justification to gobble up public funds. They use it to power their winner-take-all late-stage capitalism economy. It goes on and on.
Me, too. The AI hype machine involves some really bad ideas, the amount of money being poured into "AI" right now distorts everything, public understanding of how these tools work is low, and a lot of contemporary uses both by corporations and governments are irresponsible, dangerous, and likely to produce or reproduce harmful biases and reduce the accountability of humans for crucial decisions and outcomes.
At the same time, it's useful for me at work, and I'm curious about it. I sometimes enjoy using it. It lets me do things I didn't have time for before. It eliminates some procrastination problems for me. I think its use in computing is also likely to be increasingly mandatory for the near-to-moderate term, so it's probably good for me to get used to using it and thinking about it and looking for new useful things it can do for me.
And my own experiences in using AI are part of what drive my anti-AI sentiment as well! I see it do completely insane and utterly stupid things pretty much every day, both in my personal life and in my professional life. I have a visceral awareness of its unreliability because I use it frequently.
I should hope that as hackers we can muster some understanding and respect both for LLM users and for people with hard "anti-AI" stances. Even if you're "pro-AI" to the core (whatever that means), it's worth understanding the most serious and well-considered arguments of critics of LLMs and the contemporary "AI" race. You might even find, as someone who uses and enjoys using LLMs, that you agree with many of them.
And quite a few of them like to mix their religion with politics.
The two things they told us not to talk about at the dinner table in order to have a better experience.
Maybe it was solid advice after all.
Wonder why that is, and if we'll grow out of it peacefully.
My blue-collar work buddies don't feel as strongly or as existentially about it. To them, it's just buzzwordy crap that has ruined entertainment or made the quality of services even worse. It's more of an annoyance than outright fear and/or loathing.
Maybe if the bubble pops and the economy tanks and it affects their bottom line they might hate it as much as the aforementioned people.
Turning myself (an overweight bearded guy) into an animated hula dancer, or turning my coworker into the Terminator sinking into molten steel, doesn't seem to inspire the same hatred. Unless you don't like hula dancers.
I work in a non-tech industry and I see this all the time from people, but it's not just limited to AI. SV itself evokes hatred in a lot of people on both sides of the spectrum.
I can't repeat the worst things I've heard, but Altman and his ilk should be terrified of the mob violence they're instigating.
The tech people are the ones that have the strongest opinions one way or the other.
Same can be applied for DC (politics/military), NY (finance), LA (entertainment).
SV existed from the '60s through the '90s but didn't get much attention until the beginning of this millennium, due to the huge value creation and control it had over the US economy. It just happens to be relevant in recent times. Hundreds of years ago it was the railroad and oil conglomerates; thousands of years ago, kings and feudal lords. There is always a powerful, influential class that controls the strings -- it's a feature of human society.
(I don't feel at all confident in that statement; I am requesting reassurance.)
Everyone noticed - and of course I've seen it from the other side, too, many times. You can't hide when people are together who don't want to be. That always shows.
[1] https://gwern.net/doc/psychology/personality/psychopathy/194... - despite the filename, this is the 1988 edition. I like my paper edition (I made my paper edition) but the PDF will serve well enough for your reference here.
Mentioning "AI" in non-techie circles is a bad idea. It tells you that many here are in a massive bubble, unaware of the visceral hatred of AI among people it directly affects and who cannot opt out.
Given that AI takes more than it gives back (jobs, energy, water, houses) of course you will get anti-AI activists.
Not entirely unwarranted given the track record of LLMs as a chef though:
https://www.theguardian.com/world/2023/aug/10/pak-n-save-sav...
https://www.bbc.com/news/articles/cd11gzejgz4o
Of course, that was two years ago and it's unlikely to happen again, but that's the drawback of the "move fast and break things" attitude: sometimes you break public perception, and that's hard to fix afterwards.
That was an unnecessarily extreme reaction, as if AI had 3D-printed the ingredients.
https://news.ycombinator.com/item?id=47659135 https://www.newyorker.com/magazine/2026/04/13/sam-altman-may...
> There was an incendiary article about me a few days ago. Someone said to me yesterday they thought it was coming at a time of great anxiety about AI and that it made things more dangerous for me.
That's literally the same argument that the blockchain gurus made, and each following year it was still 5 years in the future. I'm getting strong Real Soon Now™ vibes.
Lots of AI tools already add actual value and they're only getting better. Every software dev I know uses Claude at some level. Whether it will be the next trillion dollar unicorn might be overhype, but in terms of demonstrating its general utility, it's already there. No need to wait 5 years.
Here is my level: try to get Claude to properly produce some boring boilerplate for me until the tokens run out or I get super angry and do it myself in a rage.
That’s what’s coming. Like it or not.
Source: https://rb.gy/wdzmsc (YouGov poll, n = 2,646, date = Sep 10, 2025, question = "Do you think it is ever justified for citizens to resort to violence in order to achieve political goals? (%)", raw data linked under the poll graph, downloadable)
It's currently in vogue for "the other side" to decry political violence; of course they will say it is not justified in a 2025 poll. The only problem is that it is blatantly contradicted by the political violence which is perpetrated and excused by "the other side".
I'm not sure this question actually measures propensity toward violence so much as creativity and imagination.
The idea that AI will bring an age of abundance may be true, but not in the short term. Companies are letting people go, and AI will be blamed for that, whether true or not. The public perception, established over decades, is that most Tech Bros prioritize profits over the well-being of the little guy; in my view that reputation is in some cases well deserved, and it has come with no accountability.
It's looking like AI will generate a modern version of the early-1800s Luddite Rebellion, in which British textile workers destroyed machines that displaced their jobs and prioritized factory owners' profits over workers. They targeted both the technology and the industrialists.
Tech Bros can avoid this by modifying their priorities: prioritize employee rights and lobby governments to begin implementing some sort of Universal Basic Income, or otherwise provide the means by which people can survive. Failing that, the government may start marketing Soylent Green to consumers :(
The outlook from my view, though limited, is so pessimistic that I just keep thinking about a scenario similar to, though not AI-related, Soylent Green or Great Ravine from The Dark Forest, Volume 2 of Liu Cixin's Remembrance of Earth's Past trilogy (aka The Three-Body Problem series).
I keep hoping I come across a 1973 Ford Falcon XB GT Coupe aka "V8 Interceptor" or "Pursuit Special," when all hell breaks loose :|
It's worth remembering that the way that ended was extremely bloody, particularly for the Luddites themselves. There were a handful of extreme participants and there was a murder, but there was also a hell of a lot of violence directed at anyone perceived as a Luddite, even though most actual Luddites avoided violence against other humans.
It would be good if we can somehow avoid such outcomes this time.
I once had the chance to be a Bro, far richer than any of the current ones, thanks to the still secretive and anonymous "original-sn-adjacent cryptographic collective". Things, however, did not work out in my favor, thanks to other nefarious third-party actors. So I know whereof I speak.
Any outcome is in the hands of the Tech Bros but by the looks of it, greed drives their every action, so things are not looking good!
:(
We're going to see the ultrawealthy become targets of drone attacks conducted by people who have terminal illnesses and nothing to lose.
I predict that we'll see a movement where people diagnosed with a fast-acting terminal illness -- one that gives them a few weeks to months of relatively high functionality followed by a quick downward decline, like, say, a brain tumour -- decide to kamikaze against the people they feel have gravely wronged them and their kin.
People will use something like this[0] to evade detection but won't really give a shit if they get caught because they'll be dead in a few months.
Even if they don't have access to such technology they can always just use a firearm like we've seen people try on Trump and Charlie Kirk and that Healthcare CEO guy with relative success.
I'm amazed that Peter Thiel is giving talks about the antichrist at the Vatican. I've seen relatively recent videos of him walking down the street with only a security guard or two[1], and they seem completely unprepared for any sort of attack on them from someone with a firearm or a drone.
It's like these people genuinely don't understand how destructive their actions are viewed as by society and the bubbling resentment and rage that is growing towards them.
I'm not sure what the defense against such a movement is. I guess maybe fixing wealth inequality and giving people at least the impression of greater participation in our democratic system?
This[2] is the vibe right now and it's only growing stronger by the day.
[0] https://www.youtube.com/watch?v=qrZ1aH5gtMU
In real life, the attacks in response to climate change (and, in this case, economic injustice) will be committed by such an uncountable plurality of groups that the violence will seem almost capricious.
oh nooooo. anyway
To the people on HN who are against blockchain but bullish on AI:
With blockchain and smart contracts, or even stupid memecoins, you could only lose what you voluntarily put in. You had to jump through a few hoops; then maybe you got rugpulled, maybe you became a millionaire.
With AI, regardless of whether you consented or not, you can lose your job, gradually your relationships and sense of purpose. And if some malicious actors want to weaponize it against you, you can lose your reputation, your freedom, get hacked at scale, and much more. The sooner we give biolabs to everyone the sooner someone can create an advanced persistent threat virus online infecting every openclaw machine, or a designer virus with an incubation period of half a year.
And I know what someone on here will always say. There will always be a comment to the effect of "this has always existed, AI is nothing new". But quantity has a quality all its own. Enjoy your AI slop internet dark forest. Until you don't.
Is your definition of bullish "believes the technology is a major net good for society?" - if so, you're comparing two technologies with significant social aspirations that come from very different philosophical backgrounds. While both are techno-optimist, Blockchain is a fundamentally libertarian technology, while generative AI comes from a more utilitarian, capital-focused background. People who value individual freedom above all else will get excited about blockchain and feel mixed-to-negative about AI, while people who want to elevate the overall capability of the human race to the exclusion of anything else will get excited by AI and see blockchain as a parlor trick.
@dang didn't see this post before posting the archive.ph link at https://news.ycombinator.com/item?id=47722344 - feel free to delete/merge that thread with this one
All that is being "promised" are vague claims of "abundance". But all I see is this:
"AGI" is going to bring an abundance of very angry people, and UBI to no one (because it can never work at a large, sustainable scale).
Some people are starting to realise that "AGI" was a grift and a scam, and they are not happy about the lie. The insiders knew it, and they increased spending on security and private bodyguards accordingly.
It's hard to say it's not X when we can't really define X.
Agentic AI changes the calculus.
Also every AI company is motivated to have us use their models _just enough_ to want to pay for them, but not more than that.
Maybe they are just not reporting near misses
That’s like saying we don’t need minimum wage or unions because companies choosing to treat workers with respect is also a possible outcome. It’s technically true but once you go from “is this theoretically possible?” to “is this likely?” it becomes obvious that the answer is no. Most of the big AI backers are openly salivating at destroying millions of jobs, and they’re already evading taxes now so they’re not going to be funding UBI willingly — and if you have any doubt, look at where their political spending goes, consistently to the people who are doing their best to remove what small taxes they’re still paying and declaring war on the concept of regulated markets.
Do you genuinely believe there's any chance that's going to happen?
Covid clearly showed how crisis can only benefit the rich and powerful.
AI being used to cut headcount can somehow be... good? It will just fill the pockets of the powerful.
Are you also sure that it is unthinkable to those running these companies? I wouldn't be surprised if these models end up being used for internal security -- that people would try to keep an extremely unequal society stable through surveillance and massive analysis capabilities. It seems apparent that some use of this sort already occurs and that these companies are already participating.