Molotov cocktail is hurled at home of Sam Altman
194 points
6 hours ago
| 23 comments
| nytimes.com
| HN
https://archive.ph/aoXIY

https://www.bbc.com/news/articles/czx91rdxpyeo

https://www.cnn.com/2026/04/10/tech/suspect-arrest-openai-ce...

strongpigeon
6 hours ago
[-]
It is a bit scary how people seem to genuinely be OK with violence (see this reddit thread [0]). Is it just me, or does it feel like the overall "temperature" has gone up?

[0] https://www.reddit.com/r/ChatGPT/comments/1shugf8/firebomb_t...

reply
lazyasciiart
3 hours ago
[-]
Well, dropping bombs and threatening to end a civilization certainly made me think the temperature had gone up. I’m not sure a single attempted act against some guy is worth worrying about against that backdrop.
reply
dan-robertson
2 hours ago
[-]
I think much of the reaction to the Brian Thompson killing also seemed OK with the violence, despite it happening before the events you describe, though I guess that could be an outlier.
reply
culi
1 hour ago
[-]
I think more and more Americans have what C. Wright Mills called the "sociological imagination".

We pour tons of effort into punishing visceral, direct violence like a stabbing or a shooting. But if white-collar crime is committed that leads to the deaths of hundreds of thousands of people, it's rare that anyone sees jail time. Maybe you could argue that the decisions Brian Thompson made account for only maybe 10% of why XYZ died, but when you scale that out, you could easily argue this to be a form of white-collar mass murder.

I think the younger generations are increasingly aware of this disparity in justice. If you find it hard to understand the celebration of violent vengeance, but have no such difficulty understanding the celebration of Jeffrey Doucet's retribution, then perhaps you are lacking the sociological imagination.

reply
retrac
58 minutes ago
[-]
I'm reminded of this recent Pew Research poll [1] about whether people believe their fellow citizens are moral.

[1] https://www.pewresearch.org/religion/2026/03/05/in-25-countr...

reply
wavemode
1 hour ago
[-]
What "white collar crime" was Brian Thompson guilty of? As I understand it, he was merely the CEO of an insurance company.

Nobody likes how insurance companies do business, but that doesn't make it "crime".

reply
polishdude20
1 hour ago
[-]
It's less about "crimes" and more about a moral or ethical boundary that people feel is being crossed.
reply
wanderingjew
1 hour ago
[-]
What counts as a crime is determined by the population. For a very long time, the population has delegated the idea of a "justice system" to... well, the justice system.

Things have deteriorated lately, and the population does not see the justice system as effective.

It is completely expected that we see vigilantism, but it is in no way extrajudicial.

reply
lcnPylGDnU4H9OF
24 minutes ago
[-]
> Nobody likes how insurance companies do business, but that doesn't make it "crime".

The way they "delay, deny, defend" as a matter of course shows a lack of good-faith execution of the insurance agreements, to the point that a sane world would recognize it as extremely obvious (and documented!) fraud. Sure, it is de jure not fraud, but tell that to someone who didn't get the insurance payments they were owed to pay for life-saving treatment (or, I guess, tell it to their grave).

reply
chaosharmonic
1 hour ago
[-]
That's because Brian Thompson was functionally a serial killer.
reply
disqard
55 minutes ago
[-]
From a POSIWID perspective, you are right.

From a "we live in a civilized society" perspective, I can see why some people are outraged about his killing.

Finally, looking at the balance sheet of his accomplishments, I can also see why the pitchfork crowd is cheering.

reply
typon
1 hour ago
[-]
And the reason for _that_ is the callous way American society accepts the deaths of thousands of people who die due to the healthcare-industrial complex (of which Brian Thompson was a key member). Just because those deaths don't happen with guns doesn't make them any less important.
reply
scoofy
5 hours ago
[-]
This is exactly the point of part one of Fist Stick Knife Gun: A Personal History of Violence, by Geoffrey Canada. Unequal access, or a lack of access, to the executive branch of government creates a culture of vigilantism and lends itself to organized crime as a replacement for the policing arm of the state.

https://en.wikipedia.org/wiki/Fist%2C_Stick%2C_Knife%2C_Gun

People become okay with vigilante justice when they see the executive branch as compromised — just look at the insane plot/ending of the film Singham.

Many people see this happening in the US. We should expect to see more vigilante justice and organized crime if we see the executive branch as having a significant principal-agent problem.

reply
yfw
2 hours ago
[-]
We gave up violence and made the state the authority, but that's contingent on the social contract being upheld.
reply
throwway120385
2 hours ago
[-]
We did this in the late 1800s and early 1900s because the upper classes understood that they needed to be afraid of the masses. Prior to that, political violence seems to have been the order of the day. The US has always had a pretty strong aristocracy, but the aristocrats were variously either moral people, or at least had enough of a sense of self-preservation not to get too greedy.
reply
scoofy
1 hour ago
[-]
One of the most interesting aspects of the policing power in the premodern era was the existence and split of a powerful church.

Religious institutions had some access to legitimate violence in a way that the state couldn’t control. Once authoritarianism gave way to more democratic governance, that effectively disappeared.

reply
Arodex
47 minutes ago
[-]
And then the upper classes stirred racial resentment and sent the Pinkertons to rough up and kill strikers.
reply
Mezzie
1 hour ago
[-]
Re: Organized crime.

Organized crime is also going to escalate as the economic squeeze continues to hit white collar workers. Pumping out a bunch of computer science graduates and rendering them unemployable isn't going to lead to all of them giving up and working at Walmart. A certain amount are going to figure out that they can make a better living by going black hat. Likewise for all the office managers, etc. who are put out of a job as belts tighten. Threatening the livelihoods of people who were led to expect a certain standard of living and who can organize and exploit systems is exactly how you end up with organized crime. Doubly so when the burden is falling on the young, who have more appetite for risky decisions.

reply
scoofy
1 hour ago
[-]
When I say organized crime, I don’t just mean intelligent criminals. I mean a culture of loyalty. For organized crime to function, all of the members need to have a system of justice underpinning their actions in order to keep the organization whole.
reply
Mezzie
9 minutes ago
[-]
I agree with you. I think such a culture is more likely to arise when you have people who believe in the idea of loyalty but haven't seen it bear fruit in their lives, and who are used to acting within such an organizational framework, which describes a fair number of the workers who either are being displaced or feel themselves to be.
reply
spaghetdefects
2 hours ago
[-]
I wonder how much the complete impunity of those involved with Jeffrey Epstein has destroyed faith in the executive branch. People like Leon Black, Les Wexner, and a couple of presidents not only escaped justice, but pretty much any scrutiny by any institution, media included. I think it's hard for people to look at that and not think they need to take the law into their own hands.
reply
dan-robertson
2 hours ago
[-]
I’m surely out of the loop here but what crimes are there evidence of Leon Black or Les Wexner having committed?
reply
NickC25
1 hour ago
[-]
Les's entire net worth was managed by Epstein, both before and after the conviction.
reply
spaghetdefects
2 hours ago
[-]
Participating in the child sex trafficking ring, although Wexner's involvement goes far deeper.
reply
sumedh
2 hours ago
[-]
> just look at the insane plot/ending of the film Singham.

What does that even mean?

reply
scoofy
2 hours ago
[-]
Spoiler alert for the film. The film ends not with any kind of officially sanctioned justice, but with a completely extrajudicial killing, for which audiences are expected to cheer. This is exactly the result of an untrustworthy executive branch: it gets us cheering for what is essentially organized crime that favors our side over another.
reply
0dayz
3 hours ago
[-]
Not defending them, or even Luigi, but I would argue a lot of it comes down to the abysmal labour institutions the USA has (lots of union busting, few modern laws against modern exploitation, and classical institutions undermined politically and legally).

And the growing class divide in the USA is, I think, the reason why folks increasingly see violence against the upper class as the only option.

Again, that doesn't make it right, but it explains why it is almost exclusively a US phenomenon.

reply
JumpCrisscross
2 hours ago
[-]
> explains why it is almost only an US phenomenon

Genuine question: is it?

reply
sethops1
2 hours ago
[-]
Certainly not. Nepal's Gen Z literally overthrew their government due to inequality and corruption.

https://carnegieendowment.org/research/2025/09/nepal-gen-z-t...

reply
ekianjo
1 hour ago
[-]
They were "helped"
reply
JumpCrisscross
41 minutes ago
[-]
Everyone is “helped.”
reply
baggy_trough
2 hours ago
[-]
What do you mean by “or even Luigi”?
reply
gattilorenz
2 hours ago
[-]
Luigi Mangione, the guy who shot a health insurance company CEO in 2024
reply
baggy_trough
1 hour ago
[-]
Yes, but what does the “or even” mean?
reply
culi
1 hour ago
[-]
Paraphrase:

> [...] not defending [the people who "seem to genuinely be OK with violence"]—or even Luigi (the one who carried out the violence in question)— [...]

reply
NewsaHackO
2 hours ago
[-]
There is a secret level in Super Mario Bros. where you play as Luigi and try to fix healthcare. Has a boss and everything.
reply
soco
2 hours ago
[-]
"protest without a disruption is just a parade"
reply
hnthrowaway0315
3 hours ago
[-]
I'm not saying that violence is legal -- it definitely is not. But it is part of the "packages", and it totally depends on whether one wants to use it. Historically, violence has been a very... effective tool.

When people feel that law and order do not protect them, some will eventually go "the extra mile" (somehow managers always like this phrase). It's not something we can prevent. It is human nature. I guess the super rich really like AI because it gives them extra protection.

reply
yfw
2 hours ago
[-]
Seems like it's legal if you can pay for it today.
reply
cucumber3732842
1 hour ago
[-]
> Historically violence has been a very...effective tool.

What do you mean, historically? Violence backs every government decree, from speeding tickets to the maximum water flow rate of urinals.

Overwhelming violence is something that people will go to amazing lengths and spend nearly all of their economic surplus to avoid.

reply
Fricken
3 hours ago
[-]
Of course violence is legal. Laws themselves carry no weight if they aren't backed by a credible threat of violence.
reply
krapp
2 hours ago
[-]
Violence by the state is legal. Violence otherwise tends not to be.
reply
yfw
2 hours ago
[-]
See ICE murders.
reply
culi
1 hour ago
[-]
That's violence by the state, though. That's exactly the kind of violence GP said is legal (in my reading, no moral stance was taken about this state of affairs).
reply
krapp
2 hours ago
[-]
Or American police in general.
reply
mmooss
3 hours ago
[-]
> it is part of the "packages" and totally depends on whether the one wants to use.

Could you explain what the "packages" are, and what depends on what?

> Historically violence has been a very...effective tool.

This is dramatic sci-fi for anarchists of all political stripes.

The critical reality to understand is that violence is the most ineffective tool, causing catastrophic harm for others and outcomes that the perpetrators rarely control or foresee. Revolutions can overthrow status quo power, but what follows is rarely what the perpetrators aimed for. The same happens in warfare - the outcome is rarely what anyone envisioned at the start, a fundamental lesson that experts try to teach hot-headed amateurs who think warfare will solve their problems.

It also establishes violence as legitimate - usable by everyone else too, a very bad outcome and the opposite of the rule of law, incompatible with freedom; it elevates violence and destruction over life and liberty. In contrast, the American Revolution was founded on principles of freedom and law, as laid out for example in the Declaration of Independence, and did not embrace violence as desirable.

The most successful societies have freedom, the rule of law, and allow violence only as a last necessity to restore freedom and the rule of law.

reply
jyounker
2 hours ago
[-]
A lot of people in the US feel like they've already tried the nice way, and it's failed. Given the increasing wealth disparity between the haves and the have-nots, it's hard to argue otherwise.
reply
aegis4244
1 hour ago
[-]
Many, close to most, of the "have-nots" just voted to help the "haves" at great cost to themselves. The economic decline across flyover states isn't going to stop; it's going to continue, leading those angry, uneducated voters to double down. Those old factory jobs are gone, unlikely to come back in our or our children's lifetimes. They are ideologically opposed to education, leading to more of the same, just more so - economically, politically, and educationally.
reply
calcifer
2 hours ago
[-]
> In contrast, the American Revolution was founded on principles of freedom and law [...] did not embrace violence as desireable

That's pretty rich, since the United States only exists thanks to systemic, deliberate violence on a mass scale against the local population.

reply
bdangubic
2 hours ago
[-]
and has continued to this day with violence against non-local populations around the world
reply
hnthrowaway0315
3 hours ago
[-]
I don't know, but just look at Iran and the US. Where is the "rule of law"? Who is going to provide it, magically?

Packages = ways to "adapt" to the challenges of the world.

reply
mmooss
2 hours ago
[-]
> look at Iran and US. Where is "rule of law"? Who is going to give it magically?

Rule of law - in this case, international law - has governed the Strait of Hormuz and relations between the US and Iran for decades. It's not magical or fantastical at all, but a very well-established and effective mechanism that has been the foundation of arguably the most peaceful world in human history. There is no valid argument that it doesn't work (saying it hasn't worked 100% of the time is not valid).

The Trump administration explicitly aims to destroy that rule of law. I think that's why they attacked Venezuela, Iran, civilian boats, etc. Stephen Miller advocates that power, not law, rules.

You can see the outcome when international law was used, and the outcome when it is intentionally destroyed: look simply at the Strait, which had free navigation under international law despite the extreme enmity between Iran and the US and its Mideast allies.

And now, with international law under assault, free navigation has ended. To be clear, I don't only mean the US's and Israel's attack: Developing nuclear weapons would also violate international law, and maybe so does developing highly enriched fissile materials (e.g., uranium). I'm not sure about sponsoring insurgent proxies in other countries, but that has long been practiced by many countries, including the US and many in NATO.

The rule of law allows societies to function. We don't want the world or our communities to function like failed states - those people are poor, starving, and brutally oppressed.

reply
spaghetdefects
2 hours ago
[-]
> The Trump administration explicitly aims to destroy that rule of law.

It's not just Trump. Trump and Biden both shredded the rule of law for Israel. I think both parties being captured by a genocidal foreign government has caused mass disillusionment with the ability of the US to act within any framework that brings justice.

reply
Longlius
1 hour ago
[-]
The American revolution literally engaged in systemic attacks against British property.
reply
jltsiren
2 hours ago
[-]
The critical reality to understand is that people have always used violence. If they don't believe that they live in a successful society, or if they believe that the success of the society is not distributed fairly (or in a way that benefits them), violence starts looking attractive.

Enlightenment and industrialization created societies that were fairer, wealthier, and more free than anything before. They also created ideologies such as communism and nationalism that killed hundreds of millions. If your ideas are good and successful in the long term but create poverty, suffering, and feelings of unfairness in time scales people care about, there will be violence.

Compromises are the key tool in preventing violence. Unfortunately, the word itself carries negative connotations in too many languages, making effective compromises less likely.

reply
cucumber3732842
1 hour ago
[-]
>If they don't believe that they live in a successful society, or if they believe that the success of the society is not distributed fairly (or in a way that benefits them), violence starts looking attractive.

Especially when the answer to every "well, why doesn't it work this way?" you could possibly ask seems to come back to "state violence has put its thumb on the scale of society." The government, or "the ruling order," or "the system" (whatever you want to call it) kind of brought this on itself by taking so much crap under its umbrella.

reply
jcgrillo
1 hour ago
[-]
> The most successful societies have freedom, the rule of law, and allow violence only as a last necessity to restore freedom and the rule of law.

The ugly, uncomfortable part is that when a certain fraction of people decide violence is the answer, a tipping point is reached and that's what happens. Historically, people have reached that point en masse without a great deal of provocation. So for a society to remain successful--or to remain at all--it needs to prevent this tipping point from happening. Force alone can't do that.

reply
tptacek
3 hours ago
[-]
These are message boards. The obvious sentiment - that firebombing attacks are awful (perhaps cut a little bit with "the perpetrator appears to be someone deeply in need of help") - is boring. This is an availability-bias issue: the only sentiments that actually spool out into threads are edgy ones. Once you learn to spot these effects, message boards make a lot more sense and are less jarring.
reply
swat535
2 hours ago
[-]
Another good thread to follow is the one on the murder of UnitedHealthcare CEO Brian Thompson: https://news.ycombinator.com/item?id=42317604

It's an interesting exercise to compare these threads.

My own position on the matter is not an edgy one: political violence of any kind is never justified, but it does signal that something deep in society requires a change.

reply
tremon
1 hour ago
[-]
I'm of the view that it's violence of the non-political kind that is never justified*. Political violence can be legitimized, as an option of last resort. There are plenty of historical examples where groups of people were denied every avenue of redress until they turned violent. As an example, read up on the history of most labour unions.

* one exception being defense of life and limb.

reply
johnfn
1 hour ago
[-]
I think this is a little too optimistic:

- Go onto a Reddit thread about ICE, everyone in the comment threads says they don't like ICE. That's the obvious statement, not edgy.

- Go onto a Reddit thread about Trump, everyone says they don't like Trump. That's the obvious statement, not edgy.

Why would we think the Sam Altman thread is any different? I unfortunately think the Reddit thread might be the real deal, or at least a little more real than you are saying.

reply
frinxor
2 hours ago
[-]
And the same applies to HN? Edgy messages make it to the top, and the reader should learn to react accordingly (in what way?)
reply
tptacek
2 hours ago
[-]
Mostly just by not being emotionally destabilized by edgy comments, is all.
reply
zouhair
1 hour ago
[-]
What do you mean by violence? Do you consider someone building a monster of a server farm near your home and messing with your drinking water, electricity, and life in general to be violence? Why is it that only immediate physical violence counts?
reply
givemeethekeys
3 hours ago
[-]
Silent corruption at the top causes rot at the bottom. Obvious corruption at the top causes desperation at the bottom.
reply
jyounker
2 hours ago
[-]
It's due to the widening inequality. Nick Hanauer has been talking about this for over ten years: https://www.youtube.com/watch?v=q2gO4DKVpa8
reply
layer8
4 hours ago
[-]
It used to be a little less violent: https://www.youtube.com/watch?v=HEMbp6Epfz8
reply
danny_codes
2 hours ago
[-]
The Gini index in SF is pretty close to Brazil's.

As income/wealth inequality grows, expect class violence to grow until there is a revolution. We let rich people get too rich, and this is the consequence.

Sam has lost, say, $100B so far, and he is compensated by already being a billionaire. You can see how this might lead to disillusionment with the system.

reply
oatmeal1
3 hours ago
[-]
People are okay with violence when democratic means (if first past the post even counts) do not solve their problems.
reply
bdangubic
2 hours ago
[-]
people are never OK with violence against human beings.
reply
adastra22
2 hours ago
[-]
Unfortunately some people are.
reply
DrProtic
2 hours ago
[-]
Yet we live in a very violent world, some people are definitely ok with it.

Or I missed a hint and you’re dehumanizing them?

reply
voidfunc
1 hour ago
[-]
This has to be one of the most naive comments I've read on this site.

There's example after example in history of people being totally fine with violence against human beings.

reply
ajam1507
1 hour ago
[-]
No need to look at history. We have plenty of contemporary examples.
reply
bdangubic
1 hour ago
[-]
we used to sacrifice lambs to appease the gods but we don’t do that anymore.
reply
sigmarule
2 hours ago
[-]
Have you met people?
reply
bdangubic
1 hour ago
[-]
touché :)
reply
davesque
46 minutes ago
[-]
Yes, the temperature has gone up. And we all know exactly who sits at the top of it all.
reply
yfw
2 hours ago
[-]
What do you call denying healthcare?
reply
voidfunc
2 hours ago
[-]
It's bad but this is what happens when people think they're not being heard and respected. I expect a lot more of this in the future.
reply
yibg
1 hour ago
[-]
Scary but also entirely predictable and expected.

- High wealth inequality

- Perceived inability (or reduced ability) to get ahead and have your voice heard

- Government seen as more corrupt and benefiting the elite. Different set of rules for them vs for everyone else

- Highly polarized population at odds with each other

reply
ZeroGravitas
4 hours ago
[-]
He switched to supporting Trump after Trump repeatedly joked about someone breaking into a San Francisco home to attack the owners with a hammer.

So the temperature has been high for a while and he's on board with it.

reply
quantified
1 hour ago
[-]
I simply make the observation that the 40-hour workweek took a bunch of violence to achieve. As have other forms of progress that we take for granted. Luigi Mangione is a hero to many. It's not bad that the most powerful need to consider negative outcomes in their lives. Decry violence as one such outcome, sure, but if there are no others, psychopaths have no check on them. It'd be good if maybe there were others available, eh?

Ineffectual molotov cocktails are just a cry for help.

reply
hungryhobbit
2 hours ago
[-]
Crazy people have existed since the dawn of time: I see nothing at all new here about a crazy person doing something crazy.
reply
tremon
1 hour ago
[-]
Crazy people used to gun down schoolchildren, who could be conveniently ignored. You can be sure that the ownership class won't just be sending thoughts and prayers here.
reply
gorgoiler
3 hours ago
[-]
Flip it round: if you have $999,999,999, would it not be rational to expect random violence against oneself? I’m not saying it’s justifiable, just that it is prudent to expect to be targeted by crazies.

Flip it again: as a crazy, isn’t it reasonable to enact violence against Johnny Nine Nines? If he’s so innocent, how come his house is behind two security fences?

To be a little more reductive: my house is made of gold bricks, so I hired an extra-legal anti-marauder militia, but now the marauders see me as a fair fight because I chose an extra-legal militia instead of cops and judges… game on and QED.

reply
lukewarm707
1 hour ago
[-]
The more you push, the sooner the people will snap.
reply
atoav
1 hour ago
[-]
To play the advocatus diaboli: violence is always condemned the most when it happens directly to a member of high society - the class many people on this very website picture themselves joining in the future. But if you structurally starve half a continent to save a few cents on the dollar, or fire 30,000 workers, that isn't just okay, it deserves a bonus.

If you call one violence but the other is okay because there are some layers of misdirection in between you may have to reconsider your ethics.

reply
yoyohello13
3 hours ago
[-]
Get ready for more. If the tech bros are right and millions of people lose their jobs and healthcare, we are in for a rough couple of decades. Millions of angry people with nothing to lose and a bunch of free time, all with one name in their heads: Sam Altman. He better start working on his robot army.
reply
lonelyasacloud
2 hours ago
[-]
> He better start working on his robot army.

Playing catch up with Elon.

reply
nothinkjustai
5 hours ago
[-]
I don’t think it’s surprising - some people already consider the actions of AI execs and tech companies to be synonymous with violence. Comparing something like this to destroying the livelihoods of millions of people, a lot of people would consider the latter far worse.

The temperature is certainly going up, but it definitely hasn’t reached historic levels yet, lol.

reply
_bohm
5 hours ago
[-]
Structural violence is the term most commonly used for this

https://en.wikipedia.org/wiki/Structural_violence

reply
closeparen
3 hours ago
[-]
I do not think that marketing products and services that do useful work is “violence.”
reply
hungryhobbit
2 hours ago
[-]
Illegally mass-surveilling Americans and mass-murdering people in other countries is "useful work"?

Because Anthropic just lost their US government contract (AND got slapped with a completely false order that prevents them from working with any government agency) because they wouldn't do the above... and then OpenAI slid right in and said "yeah, we can do that".

reply
hananova
1 hour ago
[-]
What you think does not matter. You need to actually convince the downtrodden, otherwise these attacks will keep happening, and they will get worse.
reply
malfist
2 hours ago
[-]
Useful work like selecting an all girls school in Iran for triple taps?

Useful work like generating mountains of deepfake misinformation?

reply
mghackerlady
5 hours ago
[-]
People are apathetic at this point. When a large number of Americans can barely afford to live, while being threatened with replacement as the economy booms on the backs of their claimed obsolescence, they don't care that a billionaire could've gotten hurt - especially when that billionaire is working against their interests.
reply
strongpigeon
5 hours ago
[-]
I mean, it's also scary because I don't think it works. People should demand a new deal and lobby for that. Throwing Molotov cocktails doesn't help with that.
reply
eschaton
5 hours ago
[-]
What happens when lobbying for a new deal fails? Do the people just shrug and accept the fate their feudal lords have determined for them?
reply
nxm
3 hours ago
[-]
and what happens when people don't want a new deal? Violence is ok then?
reply
lazyasciiart
3 hours ago
[-]
That's what the Pinkertons were for, yes.
reply
alexitosrv
2 hours ago
[-]
Oh yeah! I forgot that staying silent and complying quietly is way better!! In the 1700s they should have used that instead of guillotines.
reply
Redoubts
1 hour ago
[-]
Yeah, probably? The French Revolution sucked ass for everyone involved.
reply
pixel_popping
5 hours ago
[-]
It clearly did open a discourse on HN at least :)
reply
sigmarule
2 hours ago
[-]
People don’t lobby, corporations do.
reply
yoyohello13
2 hours ago
[-]
> People should demand a new deal and lobby for that.

Lol, really? You think there is any chance of that happening in this current political climate? Any whisper at all of rights for workers is immediately shot down as Godless Communist rhetoric.

reply
stackghost
2 hours ago
[-]
>I mean, it's also scary because I don't think it works. People should demand a new deal and lobby for that.

The data has conclusively proven that moneyed interests prevail over the interests of the people. Every single time.

reply
AlexCoventry
3 hours ago
[-]
I don't condone violence, but it's hardly surprising that people would resort to or support it in this case, considering that by stepping in where Anthropic refused to help the US military, sama essentially agreed that OpenAI will serve as the IT Department for Trump's secret police. Either that, or he's willing for OpenAI to endure a similar punishment when he refuses the inevitable demand to assist with domestic mass surveillance.
reply
sophacles
5 hours ago
[-]
You're just a smidge away from asking why they can't just eat cake...
reply
strongpigeon
5 hours ago
[-]
I think you're extrapolating a lot from my comment... One can reasonably think something has to be done to address the current (and upcoming) economic situation and think that molotov cocktails won't help. Acts like these will likely make things much worse before settling into a new situation that's probably just slightly worse.
reply
lukewarm707
1 hour ago
[-]
has the temperature gone up?

no, the mob is forming at the gate, and they are starting to climb

reply
sophacles
5 hours ago
[-]
Wondering why people might want to resist their lives becoming worse, just so some assholes can gloat about how much richer they became, is literally the same as asking why they can't just eat cake.

Thinking something should be done means nothing is being done. The poor in France didn't start with bread riots. They begged and pleaded and asked nicely first, and while lots of people thought something should be done to help them, nothing was.

Thank you for getting over the line.

Thank you for getting over the line.

reply
bloppe
3 hours ago
[-]
Maybe this is a silly question, but why can't they just eat cake?
reply
lazyasciiart
3 hours ago
[-]
If you’re genuinely wondering; it’s because cake is not a nutritionally complete food and will also not cure cancer.
reply
bloppe
3 hours ago
[-]
I'm pretty sure it's in the cancer-curing section of the new food pyramid
reply
kbelder
2 hours ago
[-]
>...is literally the same as asking why they can't just eat cake.

You are unequivocally wrong. You probably mean 'similar' instead of 'literally the same'.

reply
strongpigeon
5 hours ago
[-]
Being worried that people choose to channel their energy into actions that undoubtedly make their situation worse, rather than into actions that have a chance of finding a solution, is not the same. Or I guess it depends on how you decide to view things as being "literally the same".
reply
sophacles
5 hours ago
[-]
Worry is not an action that makes something better.

People will take action when the threat is against their livelihood, health, and homes, particularly when no action is being taken on their behalf. Their risk assessment may be different from yours.

reply
MiguelX413
5 hours ago
[-]
They don't really have another choice, do they?
reply
GOD_Over_Djinn
3 hours ago
[-]
The legal system is owned from top to bottom by the ruling class. You will not be able to use it to loosen their death grip on society. They will not allow it.
reply
malfist
2 hours ago
[-]
And if owning the legal system weren't enough, they've also set up a shadow legal system where they have even more control, called arbitration.
reply
ChoGGi
3 hours ago
[-]
I have some lovely brioche if you'd prefer.
reply
rkomorn
2 hours ago
[-]
It is the more suitable replacement for bread, after all.

Too bad she never said it, though.

reply
paulddraper
2 hours ago
[-]
The wild part about that is that it's the r/ChatGPT sub.

Which is very AI-forward.

reply
DoneWithAllThat
1 hour ago
[-]
The replies to your comment help make your point. These people genuinely think violence is fine, inevitable and justified.
reply
testing22321
3 hours ago
[-]
The top comment there mentions the French Revolution.

You think people will put up with wildly accelerating inequality forever?

It’s going to explode, the only question is when.

reply
strongpigeon
1 hour ago
[-]
> You think people will put up with wildly accelerating inequality forever?

No. Nor do I think they should. But UBI, higher income tax at the top and a wealth tax for the ultra rich sound like a much better plan to me than to blow a bunch of things up.

reply
smallmancontrov
1 hour ago
[-]
Yes, and it's not too late! Plus, sama is one of the only ultra rich I've heard talk about policies that could actually help society cope with reduced aggregate labor demand.

But when I look at how the US handled previous rounds of globalization and automation, I have very sober expectations for our ability to pursue the "happy path." Still, one has to try.

reply
hananova
1 hour ago
[-]
The average person can make one of those things happen, and not the others. Yes, the alternative is obviously better, but once violence becomes the only course of action with reasonable chance at good results, violence is what you will get. Just watch, this is going to escalate. A lot.
reply
supliminal
2 hours ago
[-]
There is nothing scary about being genuinely OK with violence.
reply
estimator7292
1 hour ago
[-]
It's gotten to the point that I walked in on some water cooler banter at work the other day, where they were discussing their favorite means of public execution.

It's not that people are accepting of violence. That doesn't just happen. Societies don't suddenly turn violent against the state. This only happens when the state has failed and become violent towards the people. If you're surprised by the rising level of violence toward the state, you haven't been paying attention to the rising violence towards the people.

The US was quite literally founded on the idea that it is an inarguable, fundamental human right to overthrow a tyrannical government. The nice and polite mechanisms for doing this have all been broken, removed, violently suppressed, or outright ignored. When there are no peaceful options left, humans will always revolt with as much violence as is necessary. History shows us this over and over. Violently oppressed societies don't tend to stay that way for long, and they certainly don't become hardline pacifists. They always eventually fight back, or they die.

The rising level of violence from the people at large is a proportional reaction to the increasing level of violence against the people. The level of tyranny has recently upgraded itself from merely an existential threat to the USA as a society to an existential threat to the entire damn planet. Of course the people are going to get violent. They feel there's no other choice, because all peaceful options have been exhausted and met with extreme violence.

That's the consensus I see on the street: all nonviolent options have been met with ever-increasingly extreme violence. When all peaceful options are removed, you pick the only one left.

In a historic lens, it's all very unsurprising. This is how revolutions happen. This is what humans have always done when met with tyranny and violent oppression. It's only surprising if you willfully ignore and excuse the tyranny and violence against the people.

reply
schainks
3 hours ago
[-]
People are coming to the logical conclusions that:

- Some if not many jobs are at risk.

- AI Psychosis is actively tearing apart families and communities, after social media and opioids have already had a pass.

- Negative social outcomes are in the service of _making money_. Not money to pay taxes to fund a healthy society, but money for the people running these systems.

Humans that lack community, safety, and purpose will embrace more drastic means of exerting control over their lives at the expense of others, no?

It is probably safe to say the temperature has been firmly up for a while. And certain subsets of the population have come to trust their Dear Leader's embrace of violence as a solution, for sure.

reply
whatever1
3 hours ago
[-]
Jobs were already lost because of AI capital investments. None of the hyperscalers had the cash flow to support the target investment levels and had to reduce labor.
reply
JumpCrisscross
3 hours ago
[-]
It’s a distinct minority. They’re convinced they’re the majority because everyone they talk to is in the same bubble, especially online. I saw the same thing with Mangione and Kirk and Pelosi.
reply
pesus
2 hours ago
[-]
Do you spend much time with people not in the tech world? I think you'd be surprised how many people hold similar sentiments, even if not to such an extreme, especially once you talk to people in the real world. I've heard far more support for this sort of thing in real life than I have online due to fear of repercussions.

Hell, even the president regularly calls for and promotes violence, so I don't think it's that much of a minority. The US was founded on it, after all.

reply
JumpCrisscross
2 hours ago
[-]
> Do you spend much time with people not in the tech world?

Most of it. Across the political spectrum.

> even if not to such an extreme

That’s precisely the point. There is a massive difference between doing or aiding and abetting such behavior, cheering it on, and giving in to the impulse of “couldn’t have happened to a worse person” before self-correcting. There are a few saints who reject the violence at first glance. But most people are in that self-correcting phase, and the correction happens the more they learn about the specifics of the assault.

> even the president regularly calls for and promotes violence

To what numerical end?

reply
pesus
2 hours ago
[-]
> To what numerical end?

How many people live in Iran, who he just threatened to genocide? How many people hold views that Trump thinks make them his enemy, including being critical of ICE? How many immigrants are in the country? It's going to be a very large number, no matter how you slice it.

reply
JumpCrisscross
2 hours ago
[-]
I mean to what numerical end do Trump’s supporters pick up arms when he calls them to violence.
reply
pesus
2 hours ago
[-]
This is getting more and more specific - I was talking about him encouraging violence.

But some examples: Jan 6, the attack on Paul Pelosi, every ICE agent.

Personally, I've also received multiple death threats from his supporters.

reply
fzeroracer
2 hours ago
[-]
I mean, he literally just posted a video on his account of a woman being violently beaten to death with a hammer as a call for people to do something about immigrants.
reply
JumpCrisscross
2 hours ago
[-]
> he literally just posted a video on his account of a woman being violently beaten to death with a hammer as a call for people to do something about immigrants

Zero dispute. I’m challenging the notion that Americans are rising to that call. (Or cheering on specific attacks, versus general notions of violence.)

In a weird way, maybe social media helps in this one instance. We can’t let the enemy be faceless. There is no glory in shooting a specific mother or nurse.

reply
fzeroracer
1 hour ago
[-]
You're missing that the Americans rising to the call are employed by the state itself. ICE over Trump's tenure with a burgeoning budget has become filled with folks that were part of known white supremacist groups. The most violent believers have been state sanctioned and paid to inflict his agenda.
reply
kube-system
3 hours ago
[-]
What I think is different today is -- regardless of how many people organically think this way -- social media is normalizing the idea. We're all being exposed to it.

It's only a minority of people who are radicalized, but it's a growing minority. Radical ideas are more accessible than ever for people to latch on to.

Radical views on violence, social relations, science, politics, distrust of institutions, etc are all way more common than they were in the 90s.

reply
JumpCrisscross
2 hours ago
[-]
> but it's a growing minority

I’d want to see this interrogated with rigor. The alternate hypothesis, and my null, is a relatively fixed fraction of folks is more connected and visible today than before.

reply
2dfs
3 hours ago
[-]
I think you're misreading it entirely, which doesn't surprise me given that you're a VC.

Here's one of the posts on that thread: "I mean one thing is to use AI or even ChatGPT as a product, and another is being aware of how billionaires treat the rest of the people

As for Sam, he also has pretty controversial views for how this whole thing will pan out and how he doesn't give a shit about the consequences it might have for the rest of us. Also more recently, the whole Pentagon contract thing"

People can both use LLMs whilst having a distasteful view of the leaders of the industry.

reply
JumpCrisscross
2 hours ago
[-]
> whilst having a distasteful view of the leaders of the industry

I have a tremendously distasteful view of a lot of Silicon Valley leadership. Doesn’t mean I want them to suffer at the hands of vigilante justice.

reply
MiguelX413
1 hour ago
[-]
Why not?
reply
newspaper1
3 hours ago
[-]
How about the 190 school girls the US murdered in the very first attack against Iran?
reply
JumpCrisscross
2 hours ago
[-]
Yeah, the number of people connecting a potential war crime in a military operation to Sam Altman’s San Francisco residence with violent intent is slim.
reply
newspaper1
2 hours ago
[-]
I’m not saying this was due to war crimes. I’m saying war crimes blew the Overton window for violence wide open.
reply
JumpCrisscross
2 hours ago
[-]
> war crimes blew the Overton window for violence wide open

I see no evidence of this. We didn’t see it after Iraq. And Luigi predates all this.

These aren’t organized political movements. They’re lone actors reaching breaking points. They don’t need a theory of violence, just access to guns and a day of mental instability.

reply
newspaper1
3 hours ago
[-]
After watching children literally be liquified in Gaza for two years, violence directed at Sam Altman doesn’t even move the needle. Our entire human rights framework was obliterated by Israel (with the blessing and support of the US and Europe).
reply
analog8374
1 hour ago
[-]
Does causing mass poverty count as violence? Because it's kind of like violence.
reply
DrProtic
3 hours ago
[-]
Maybe because people got used to violence being used against them?

All this violence against the innocent in various places and levels, and you think it’s weird that people are fine with violence used against a billionaire conman?

reply
therobots927
6 hours ago
[-]
It is scary. You know what’s also scary? Being told a robot is going to take your job and healthcare away.

There’s a lot of scary shit going on.

reply
happytoexplain
5 hours ago
[-]
Also scary: Seeing a comment this ostensibly un-controversial in grey.
reply
tptacek
3 hours ago
[-]
There's nothing "un-controversial" about trying to mitigate a firebombing attack with a broad critique of capitalism. It's an edgy take, just own it.
reply
pixel_popping
5 hours ago
[-]
I agree it is scary, but why would a robot take healthcare away? Wouldn't the opposite be true?
reply
WBrentWilliams
5 hours ago
[-]
The quickest way to rile up an existing mob is to make them fear their livelihood is being reduced or removed. The _robot_ is not taking away healthcare, but the effect of the robot existing hits directly at the livelihood of the masses.

In the US, health insurance is largely tied to employment. Health insurance, in a personal economic sense, reduces to being able to pay for healthcare. This policy is largely a left-over of World War II era employment policies. No one is taking healthcare _away_ from anyone (strictly speaking), but the ability to be able to _pay_ for healthcare is reduced to zero when employment ceases. Accessing the safety net is a separate skillset. This skill set becomes more difficult to achieve because the political class does not want to provide healthcare for everyone, only the worthy (their loyal voters).

I grew up in and am still a member of the precariat. I am educated and doing well, but I wear a well-polished pair of golden handcuffs due to how my ability to afford healthcare for myself, and my family, is tied to employment. Politically, I _do not_ like being tied to my employer by such a chain, but my arguments to change the system have been met with quite firm push-back.

reply
stvltvs
5 hours ago
[-]
Insurance companies are using AI (whatever that means in this case) to make coverage denial decisions. That can be reasonably summarized as robots are taking away our healthcare.
reply
whimblepop
3 hours ago
[-]
Link, please? I 100% believe this but I'm curious about the reporting by which you discovered this
reply
daveguy
3 hours ago
[-]
Google this and take your pick:

ai decisions health insurance

Also, to be clear, I don't think violence is the way to confront the oligarch sociopaths. There is clearly enough momentum to fix a lot of the monopoly / anti-consumer issues over the next 4-8 years. Assuming Trumpty Dumpty doesn't try to put our military at polling places or some other anti-democracy putinesque bullshit like that.

reply
ironman1478
5 hours ago
[-]
There are stories about insurance companies using AI to determine whether a claim should be approved or denied.

https://www.palmbeachpost.com/story/news/healthcare/2026/03/...

reply
kube-system
3 hours ago
[-]
That is scary but the methods traditionally used to deny claims aren't really any better. I've had claims denied after they were explicitly pre-approved because of string literals not matching exactly.
reply
pesus
2 hours ago
[-]
It at the very least provides more cover to the ones denying the claims. They can blame it on AI in the hopes they're not the next one being targeted by vigilantes.
reply
ChoGGi
2 hours ago
[-]
My aunt worked for an insurance company while she was semi-retiring as a doc, she lasted a few months before she was too disgusted to continue.

AI isn't needed for insurance to fuck anyone over.

reply
whimblepop
5 hours ago
[-]
Because healthcare in the US is tied to employment. For most people here, losing a job means losing access to healthcare (partially or totally).
reply
cryptonym
5 hours ago
[-]
Because the robot would take their job and having a job is a precondition to healthcare (may vary by country)?
reply
anematode
2 hours ago
[-]
As far as I know, the US is the only country like this. But anti-AI sentiment is rising around the world.
reply
sophacles
5 hours ago
[-]
Well in the US you get healthcare from a job (either directly in the form of insurance or indirectly in the form the money to pay for healthcare). If the robot takes your job, it takes your healthcare too.

You know this, stop pretending otherwise.

reply
therobots927
5 hours ago
[-]
1. Americans need a job to get healthcare

2. Robots take away jobs from Americans and the proceeds to go the owner (investor) class

3. Americans no longer have healthcare

Understand?

reply
pixel_popping
5 hours ago
[-]
I understand (I'm not from the US), however, wouldn't healthcare in the US get drastically cheaper (even eventually free?) if hospitals/clinics were composed of humanoids instead of humans?
reply
lazyasciiart
3 hours ago
[-]
That’s the logic Keynes used to suggest that we’d all be working 15 hour weeks by now, with computers doing all the work.

Needless to say, we have discovered that productivity gains are not consistently converted into reduced costs and work hours.

reply
WBrentWilliams
5 hours ago
[-]
Interesting idea. I cannot say that I can answer affirmatively or negatively. There are also human elements to be considered. Humans are status-seeking social creatures. There will always be a stain on humanoid-delivered care, no matter how high-quality, as being not as high quality as all-human delivered care. That is, status accounts for a lot.

I can also draw pictures of how dangerous humanoid care can be, as there is a possibility in a break in the chain of responsibility. If a human medical professional messes up, you (or your survivors) can sue and seek damages directly, as well as sue the hospital and insurance system (with mixed results).

With humanoids? Currently, the bar is higher as the entity being sued is not the hospital, nor a person, or even a team. The only entities that can be addressed are the corporation that runs the hospital and the corporation that produced the humanoid. These two entities have an incredibly out-sized advantage in terms of sheer delaying tactics, not to mention arbitration clauses and other legal innovations. Most injured will simply give up, which is a legal win for the two entities.

In my opinion, humanoid care will take a large amount of time, damage, and treasure to lower the costs. No actor will willingly give up their cash flow. My view may be too strong.

reply
threecheese
4 hours ago
[-]
This is definitely a potential future state, but not one I could imagine happening soon. Given that the robots which are currently deployed do not benefit people directly (and even the indirect benefit of lower costs or better investment returns appears to be captured by the upper tiers of the economy), we have no confidence that they would be deployed to benefit anyone but their owners.

More likely near-term states are less rosy, especially if intelligence takes off.

reply
redsocksfan45
3 hours ago
[-]
Doctors are an incredibly powerful lobby in America and are massive beneficiaries of the status quo. Across America, doctors live in huge mcmansions in gated communities, even while medical bankruptcies cripple the working class in the same town. Oh but the administrators! It's not the doctors, it's the administrators... Who are more often than not also MDs.

This is to say, doctors protect their own professional interests and would never permit this.

reply
fatbird
4 hours ago
[-]
The price is set by how much providers can extract, not by their costs to provide. It's not at all obvious that a vast reduction in their cost of labour would translate to price reductions.

It's worth keeping in mind that in the U.S. the health marketplace is extremely complicated and cannot be analyzed with simple demand/supply graphs.

reply
GOD_Over_Djinn
2 hours ago
[-]
No, they wouldn’t get cheaper. The profit margins in the healthcare industry would get bigger.
reply
wak90
5 hours ago
[-]
Lol no
reply
misiti3780
5 hours ago
[-]
the narrative I'm hearing is AI breakthroughs will drive the cost of healthcare to zero (i.e. AlphaFold etc.)
reply
outside1234
3 hours ago
[-]
I don't condone it, but I understand the anger.

The billionaire class has enabled armed masked police in our streets, endless layoffs, basically don't pay taxes at any reasonable percentage, and basically have rigged politics with Citizens United.

Given that, I can see how people are resorting to 18th century French tactics.

reply
seanlinehan
2 hours ago
[-]
The top 1% of income earners pay 40% of all the federal taxes collected. The top 25% pay 89% of taxes.

Net of transfers, 60% of households receive more from government transfers than they pay in taxes.

The idea that rich people don't pay taxes is just not correct. The entire system is basically rich people subsidizing everybody else through byzantine distributional systems.

reply
lokar
2 hours ago
[-]
There is no ability to accumulate and hold wealth without a stable society. That means broad rights, democracy and limits to inequality.

Stop acting as if taxation is theft; it’s the fee that allows everything else to function.

reply
seanlinehan
2 hours ago
[-]
I didn't say any of that. Taxes are fine.
reply
lokar
2 hours ago
[-]
And also, the idea that highly progressive taxes (enough to limit inequality) are somehow unfair.

The primary role of the state is to protect private property, why not charge by value?

reply
seanlinehan
2 hours ago
[-]
Also didn't say that. You're arguing phantom arguments I very clearly didn't make.
reply
hn_acc1
2 hours ago
[-]
The top 1% also owns something like 70% of all the wealth, IIRC. They should be paying MORE than 40% of all the taxes.
reply
danny_codes
2 hours ago
[-]
The Gini coefficient is still going up. That means we are getting less equal over time. The entire system is subsidized by the rich because nobody else has any money! By definition rich people have to pay.

If we have a pool of $100 and I take $99 and you get $1, and then I get taxed $5 and you get taxed $0, I still have almost everything. Is this.. unfair to me?

It's in fact the opposite of what you said: everyone else is subsidizing the rich, who have gamed the system to live extravagant lifestyles. Eventually this will lead to a revolution and all us rich people will be beheaded. It's the normal outcome of this sort of thing.

reply
outside1234
2 hours ago
[-]
I'm not talking about the top 1%. I'm talking about the top 0.01%.
reply
seanlinehan
2 hours ago
[-]
The top 0.01% still pay enormous taxes. Elon one year personally paid $11B in taxes.

I get that a lot of people think people's unrealized capital gains should be taxed, so maybe the argument you're making is something like:

"People with very large paper-gains based on appreciation of the market-value of the assets they own pay 0% taxes on those unrealized gains"

In which case, yeah, that's definitely true. But if they sell those assets, they pay taxes. Some of the taxes from those sales can be offset by doing things like donating enormous sums of money to charity. And sometimes people take loans against their equity, which is not a taxable event. Though, in order to pay those loans back, they have to sell something (taxable) or earn money elsewhere (also taxable). So loans are tax deferral...

But eventually the tax man comes for everybody.

reply
hilariously
1 hour ago
[-]
- Buy assets (stocks, real estate, etc.)

- Hold them as they appreciate (no tax on unrealized gains)

- Borrow against them (loans are not taxable income)

- Die without ever selling

(you are wrong)

reply
watwut
2 hours ago
[-]
What is happening is that they are becoming richer and the lower ranks are becoming poorer. Simply, they are so much richer that the little fraction they pay in taxes looks big.
reply
seanlinehan
2 hours ago
[-]
This perception that "lower ranks" are becoming poorer is just empirically not true.

On every metric, people in all income brackets are earning more on both a gross and COL-adjusted basis. It is the case that top quintile income has increased more than bottom quintile income, but a faster relative increase does not mean the other group is getting poorer.

The other very interesting thing is that there are statistically not really "upper ranks" and "lower ranks". The majority of people in the 1% each year are there for the first (and often only) time. And a very, very small percentage of people in the bottom percentiles remain there for their whole life.

Some interesting research:

* 12% of the population will find themselves in the top 1% for at least one year

* Nearly 70% will spend at least one year in the top 20%

* More than half will have at least one year in the top 10%

* While 12% may reach the top 1% at some point, a mere 0.6% stay there for 10 consecutive years

All of that is to say, the idea that there is some entrenched upper class waging war against some entrenched lower class is just empirically not true. If you dig through the data what you'll find is:

1. People who are just entering the workforce don't make a lot of money

2. As people spend time in the workforce, they make increasingly more money

3. When they retire, they start making less money but tend to have assets to live on

It's far more dynamic than most people's intuition leads them to believe.

reply
tikkabhuna
1 hour ago
[-]
Billionaires aren’t becoming billionaires from income. It’s increased stock valuations that create that level of wealth.

I constantly see posts focused on high earners already paying tons of tax. They do, but this should reinforce the point that the ultra wealthy should be paying more tax. People aren’t saying the guy on £500k should pay more, they’re saying the guy with £100m in assets should be.

reply
Analemma_
6 hours ago
[-]
Altman keeps on telling people he’s going to take away their jobs. He says that because it gets cred in tech circles, but in America this is an existential threat, not much different from telling someone “I’m going to break your kneecaps”. Of course some subset of people are going to respond with violence.

The sheer tone-deafness of AI marketing is going to come back to bite us very hard. This is probably just the beginning.

reply
2dfs
3 hours ago
[-]
Yep. Just wait until a large group of people (talking millions of people at once) lose their jobs. They will want someone to blame.

And I have no sympathy because this joker has been pushing people to the edge with his hyping.

reply
xienze
3 hours ago
[-]
Yeah, part of me thinks the reason we know all their claims are bullshit is that you’d have to be pretty dense to promise to eliminate >50% of jobs in many high-value sectors within 12-18 months and _not_ expect to create more than a few people who’d have nothing to lose…
reply
gravisultra
3 hours ago
[-]
Here's the head of research at OpenAI saying "MORE. Don't stop." to the genocide of Palestinians. He still works there.

https://x.com/QudsNen/status/1806729161840476598

reply
outside1234
3 hours ago
[-]
There was a rumor going around Silicon Valley that if ICE came to San Francisco in force that Mark Zuckerberg's house was going to go up in flames in retaliation. You will be surprised to learn that the oligarchs talked to Trump and they did not come.
reply
cyanydeez
2 hours ago
[-]
uh, the president of the united states just threatened to nuke a country.

What kind of weird world are you living under...

reply
jlarocco
1 hour ago
[-]
I think we're going to see a lot more of it.

The job market's shit, it's nearly impossible for young people to buy houses or pay rent, well-paying jobs are disappearing to AI, inflation is skyrocketing, and people are getting desperate. But then we're told the economy's doing great and billionaires like Musk and Altman are rolling in money.

reply
GOD_Over_Djinn
3 hours ago
[-]
We can’t vote our way towards a better future. The corrupt MAGA and DNC institutions strangle any nascent grassroots movement in the crib. And we cannot make them relinquish their death grip on our country with only bare hands.

Seriously shocked that this is the aspect of this moment in history that you choose to focus on, and not the absurd levels of violence perpetrated by the ruling classes against common people.

reply
jmyeet
3 hours ago
[-]
I'm not saying throwing a Molotov cocktail is ok. It's not. I think most people are analyzing the incident as being indicative of the times we're living in, particularly with the warehouse fire.

But where people are "OK with violence" is with state violence.

State violence includes police violence (>1000 people are killed every year in the US by police), prison violence, violently rounding up immigrants and putting them in concentration camps, criminalizing homelessness, denying people life-saving medical care, evictions while landlords collude to raise rents, genocide, sending random people to a maximum security prison in a foreign country (ie CECOT), mass shootings, going with a firearm to a protest to instigate an incident and get a legal kill, intentionally creating the opioid crisis and so on.

For a large number of people some or all of these incidents will get a reaction somewhere between "thoughts and prayers" and "no, it's good actually".

Compare the state's reaction to one healthcare CEO being murdered and the perpetrators that are implicated in the Epstein files. Epstein himself was known to authorities since the 1990s and got an absolutely sweetheart deal in 2008.

So I'd say the real problem is what people view as violence and who's allowed to do it, seemingly without oversight or consequences of any kind most or all of the time.

reply
plorkyeran
5 hours ago
[-]
AI company marketing is pretty overwhelmingly "we're going to take away your job and leave you to starve on the streets". People concluding that the public face of this is their enemy who must be stopped is just a really unsurprising outcome.
reply
rvz
5 hours ago
[-]
That is what Ilya (and many other employees) (fore)saw.

They did not want a target painted on their backs or being involved with the company responsible for mass job displacement.

Let's hope that SF doesn't turn into a free-for-all after the IPOs, since the silliest thing is for everyone to move to SF and buy up the houses and then the have-nots realise who got rich.

I'd donate that money away or give the employees (who have nothing) a one-time bonus / raise like the Five Guys owner [0] to not be a target.

[0] https://www.theguardian.com/us-news/2026/mar/27/five-guys-ce...

reply
lo_zamoyski
2 hours ago
[-]
It absolutely has. Both the Left and the Right have seared consciences and take no issue with murder and thuggishness as long as it's "their guy" doing it to "the other guy".

The world was never a wise and virtuous man's paradise, but it has been quickly sliding into ever increasing and monstrous irrationality. Give Plato's "Republic" a read and you might find it concerning how closely we exemplify the last stages of political and social decline.

reply
whalesalad
1 hour ago
[-]
I don't have a problem with violence, but I do take issue with the mass dismissal and outright hatred for AI by people who don't even understand what it is.
reply
0cf8612b2e1e
6 hours ago
[-]
One thing I have idly wondered is how much do the ultra rich protect themselves from theft or kidnapping. Is it just not a real concern?

If Taylor Swift owns a dozen homes, does she have full time security guards at each one? Or just accept some amount of burglary may occur? Do they go everywhere with a guard? Only to public events?

reply
bombcar
5 hours ago
[-]
It varies and they don't talk about it (obviously) but you can glean things from various sources. The more "public" the ultra rich are, the more they'll have security, especially noticeable security.

The silent or unknown ones will often still have something (usually a requirement of their or their company's insurance).

Once you graduate from "2, 3, 5 houses" to "mansions" you will have staff at each one, even if relatively bare-bones.

reply
2dfs
3 hours ago
[-]
Yeah, but they're useless if a large organised group shows up.
reply
randyrand
1 hour ago
[-]
No they’re not.
reply
snypher
1 hour ago
[-]
The Bling Ring were successful in their crimes for a little while, and obviously Mr Security Team didn't stop them. They got caught via the oldest tale of all time: a rat on the ship.
reply
sleepybrett
3 hours ago
[-]
hell they will probably join the mob instantly.
reply
keeda
1 hour ago
[-]
Once in a while we get to see concrete numbers for some of them, e.g. Meta spent $27M+ in one year on Zuck's security, which is way more than the other CEOs: https://fortune.com/2025/08/16/mark-zuckerberg-meta-security...
reply
strongpigeon
5 hours ago
[-]
I once knew a guy who used to be head of physical security for Bill Gates. Gates has bodyguards with him all the time and a sizable security team at his home in Medina. You wouldn't believe the number of lunatics who show up at the home unannounced and claim he promised them money (or that they're somehow related to him).
reply
lamasery
3 hours ago
[-]
Well, look, they forwarded his email ten times as requested, so it seems pretty clear that he does owe them money.
reply
sleepybrett
2 hours ago
[-]
I once did a little project for the home in Medina; I never went on site, but I did visit the office of his property management company. Dozens of people for managing the properties, plus on-site staff for each, as well as (I think) bgC3, but not the B&MGF.

To hear tell from my coworkers who did go on site, the security was insane, and the media apparatus was insane (like a DVR for every channel running 24x7, so the family could call up whatever, wherever they were, at any time). This was back in 2010ish, before the marriage blew up.

reply
hnthrowaway0315
3 hours ago
[-]
For a start, they have bodyguards and rarely go out in public without the right protection. They also go to great lengths on security and cybersecurity (I know one who sets up so many hops between endpoints that Microsoft banned his account). Even most of their employees don't know where they are or where they plan to be, unless they choose to share it. Of course there's always a way to probe, but people who commit random killings rarely have the skills or mindset to do that.
reply
ciupicri
5 hours ago
[-]
> accept some amount of burglary may occur?

From https://edition.cnn.com/2025/05/13/entertainment/kim-kardash...

> Kim Kardashian, testifying in the trial of the burglars accused of tying her up and robbing her at gunpoint nearly nine years ago, told a Paris court on Tuesday that she “absolutely thought” her assailants would kill her.

> “I have babies, I have to make it home, I have babies,” Kardashian recalled pleading with the armed men, who had broken into her hotel room while she slept during Paris Fashion Week in 2016.

> Facing her alleged attackers for the first time since the heist, the billionaire reality TV star detailed how she was robbed of nearly $10 million in cash and jewelry, including a $4 million engagement ring – gifted to her by her then-husband Kanye West – that was never recovered.

reply
bnwrt
1 hour ago
[-]
The SF Chronicle speaks of an "alleged attack", in which a Molotov cocktail was thrown at the outer gate. Looking at the picture, there was zero chance of the house catching fire.

https://www.sfchronicle.com/crime/article/molotov-cocktail-c...

So the arrested suspect is either the wrong person, did not actually want to kill anyone, or has no clue how fire spreads.

A strange incident that will make many people think of a self-staged attack, like sending a noose to oneself (where "oneself" does not have to be Altman; it could be a pro-AI org that wants to generate sympathy).

reply
glitchc
2 hours ago
[-]
While reprehensible, are we certain this is not a false flag operation? It is apt to garner a great deal of sympathy in the right circles.
reply
34qjgh
35 minutes ago
[-]
It certainly is highly convenient timing, just before the Musk lawsuit against OpenAI.

Only minor damage was caused to a metal gate far from the building. Yet people here speak of "bombing Altman". So the sympathy works, and might work on the jury in that trial as well.

reply
wunderlotus
16 minutes ago
[-]
It’s plausible but gosh, I certainly hope not. That is such a sad world to live in.
reply
QuantumGood
2 hours ago
[-]
Definitely deplorable. If he is as many claim him to be, and the info about pushing hard to control the narrative recently is accurate, the timing certainly is suspect. But part of the point of a false flag is to make it hard to discuss whether it is a false flag, and ideally to see that it is never determined one way or another.
reply
richardfeynman
1 hour ago
[-]
You think Sam bombed himself? Is this what Hacker News has come to?
reply
DoneWithAllThat
1 hour ago
[-]
This is perhaps one of the stupidest things I’ve read this week.
reply
MontyCarloHall
6 hours ago
[-]
I don't think most people in tech are quite aware of the level of visceral AI hatred amongst non-techies. I've personally witnessed the worst Thanksgiving dinnertable fight I've ever seen (after someone revealed that their recipe was AI-generated, a couple people literally spat out the food they were enjoying and threw their plates in the trash), and a divorce (a very solid marriage between two people who were once both staunchly anti-AI unraveled within weeks after one of them changed their tune and adopted AI at work).
reply
lbarrow
6 hours ago
[-]
Spitting your food out because the AI generated the recipe is so clearly irrational that I chuckled a bit on reading that
reply
dirkc
6 hours ago
[-]
People talk about AI getting things wrong all the time, why is it "so clearly irrational" to be doubtful of a recipe that might include ingredients that can make you sick?
reply
VectorLock
6 hours ago
[-]
Because I hope that someone whose hands were required to assemble the recipe didn't blindly add ingredients like "bleach" if the AI happened to hallucinate them.
reply
stvltvs
5 hours ago
[-]
A naive hope perhaps, but this ignores the risk of LLMs just creating a bad recipe based on the blind combination of various recipes in their training data.
reply
VectorLock
5 hours ago
[-]
As the parent comment said, the people seemed to be enjoying the food otherwise, so the LLM didn't create an unpalatable combination, and I can't think of any combination of edible, harmless ingredients that would combine into something harmful (when consumed in reasonable amounts).
reply
xmprt
2 hours ago
[-]
This is exactly what makes it dangerous. Food can taste fine but still make you sick; not all bacteria will taste off. I'm assuming you're not a chef, because if you were you'd know how absurd your statement is.

For a super simple example, if you don't properly handle or cook raw meat then you risk getting sick even though the food might not immediately taste bad. Maybe that's obvious to you but might not be to the person preparing the food. Another example: Rhubarb pie is supposed to be made with the leaves and not the stalk because the stalk is poisonous and can cause illness. Just kidding, it's actually the other way around but if you were just reading a ChatGPT recipe that made that mistake maybe you wouldn't have caught it.

reply
psvv
3 hours ago
[-]
If meat was involved, the cooking time may have been unsafe if other precautions weren't taken by the cook (like checking the internal temperature).
reply
defen
5 hours ago
[-]
Let's take a second to think about the threat vectors here. The two obvious ones I can think of are "AI hallucinates and tells you to put non-food into the food" and "AI hallucinates and gives you unsafe prep instructions" (e.g. "heat the chicken to an internal temperature of 110 degrees"). For both of those, it's not clear why "random recipe from an internet blog" is safer than something the AI generates. At some level, if someone is preparing your food, you need to trust that they know how to prepare food, no matter where they get their instructions.
reply
kube-system
2 hours ago
[-]
People who do not understand or even use AI are not in a position to even begin "thinking about threat vectors". That isn't how they've come to their worldview, at all.
reply
daveguy
3 hours ago
[-]
Yeah, but I would trust a human blogger not to suggest heating chicken to 110°F, because the human understands that they're taking responsibility for the recipe... The LLM doesn't have a clue about responsibility, except to regurgitate feel-good snippets about it.
reply
tokioyoyo
1 hour ago
[-]
Wild takes in this thread. The copy- and blog-writing industry is mostly random Fiverr gigs or cheap-labour hires pumping up SEO rankings.

Everyone grew up understanding that you should never trust random internet content 100%; now we're saying AI has to be 100% reliable.

reply
daveguy
26 minutes ago
[-]
Okay, captain pedantic. Clearly I'm assuming a known food blogger with a reputation at stake, employed by Bon Appétit / Food Network / etc., in this scenario. Not some random SEO spam.
reply
newZWhoDis
3 hours ago
[-]
>because the human writing the blog understands

Bold assumption

reply
strongpigeon
6 hours ago
[-]
Because it assumes the person actually making the food has no common sense?
reply
therouwboat
5 hours ago
[-]
We had a billion-dollar AI company install a vending machine that gave stuff away for free, so maybe AI users don't have common sense.
reply
bloody-crow
4 hours ago
[-]
This is an experiment they ran and were prepared to lose money on. It seems perfectly reasonable for an AI company to test their products in adversarial conditions to have a better understanding of its flaws and limitations.
reply
catlikesshrimp
3 hours ago
[-]
Fantastic story I hadn't heard, April Fools' Day included

https://www.pcgamer.com/software/ai/anthropic-tasked-an-ai-w...

reply
wpm
5 hours ago
[-]
If they're asking an LLM for a recipe, they don't.
reply
pixel_popping
5 hours ago
[-]
My wife does it all the time, and it's actually decent.
reply
baggy_trough
2 hours ago
[-]
That's quite an assertion.
reply
bloody-crow
4 hours ago
[-]
That's just pure nonsense. My partner is a very competent cook and she invents new recipes and experiments all the time. I don't see why she can't use LLM output as inspiration, combined with her own expertise, sense of taste, and preferences, to come up with an excellent dish.
reply
steve1977
5 hours ago
[-]
People get things wrong all the time as well, so I wouldn't trust them either.
reply
happytoexplain
5 hours ago
[-]
People get things wrong in a different, more observable/predictable way. Sure, we are easily tricked dummies and we can't know if a human is right or wrong, but our human-trust heuristics are highly developed. Our AI-trust heuristics don't exist.
reply
steve1977
5 hours ago
[-]
I mean, I've had people serve me expired food and half-raw chicken. The latter I could observe; the former I couldn't so easily. Both could have made me sick.
reply
happytoexplain
5 hours ago
[-]
For sure. I'm not defending human perfection, I'm defending human caution (Disclaimer: The format of the preceding sentence was chosen without AI assistance).
reply
s1artibartfast
2 hours ago
[-]
Someone once tried to feed me dinner from a recipe they found on the internet. I punched their lights out and then called the cops.
reply
mikestew
5 hours ago
[-]
Dunno about you, but I like the increased viscosity in my sauces when I use glue:

https://www.bbc.com/news/articles/cd11gzejgz4o

reply
ikkun
6 hours ago
[-]
I could see being concerned about food safety; I wouldn't trust an AI recipe to tell me how long/what temperature to cook chicken, and I might not trust someone who uses AI to generate recipes to know either.
reply
kbelder
2 hours ago
[-]
An appropriate response might be asking "Hey, I don't trust AI... what's the recipe?"

The described action seems performative and emotional, as if they were ideologically opposed to AI. Like spitting out food because it was prepared by a caste you found unclean.

reply
ctoth
5 hours ago
[-]
Hi! I love to cook! I also use AI to brainstorm recipes sometimes! Wanna try asking Claude, ChatGPT, Gemini, or even Grok what temperature chicken needs to be cooked to? I just asked Claude: 165°F (74°C) internal temperature.

Where does this come from?

reply
ikkun
5 hours ago
[-]
If you ask that question alone, AI is most likely to get it right, but the usual pitfalls apply: it sometimes randomly gets things wrong, people are more likely to miss wrong information when it's surrounded by correct information, and LLMs are specifically good at producing text that seems correct on the surface. In my experience, people often use AI precisely because they don't have much knowledge in an area. If you already know plenty about cooking, using AI is probably fine; I just see it as a red flag.

Cooking is also a form of art, with a strong social aspect. Using AI for it has a similar ick factor to using generative AI for pictures. I'm not saying I immediately distrust anyone using it, but I do think it's a sign that the person may care a bit less about what they're doing.

reply
miloignis
5 hours ago
[-]
Arguably, that's wrong - not because it's unsafe, but because it's not the best temperature for any part of the chicken I know of. I'm a big J. Kenji López-Alt and Serious Eats fan, and 165 is too hot for good chicken breast and too cool for good dark meat: https://www.seriouseats.com/chicken-thigh-temperature-techni...
reply
happytoexplain
5 hours ago
[-]
I can't tell if you're criticizing the parent or are innocently asking how Claude knows the temperature for chicken.

To be clear in the case of the former: Harm data points have approximately one trillion times the weight of no-harm data points, as a rule of thumb.

reply
stvltvs
5 hours ago
[-]
Even if it can give the right answer when asked, will it necessarily account for that in a recipe it generates? A beginning cook may not know enough to ask.
reply
ahahahahah
2 hours ago
[-]
That's such pointless evidence.

Let's see what gemini says in response to a more realistic prompt: https://gemini.google.com/share/f0bcbe46c337

Well, look at that. 1.5 lbs of chicken breast in the oven @425 for 10 minutes, and a minute or two of broiling should do the trick.

Unlike all human-written recipes I found, it doesn't give the temperature to cook it to.

reply
s1artibartfast
2 hours ago
[-]
A cook not paying attention or messing up an accurate recipe is overwhelmingly more likely.

If someone is at the point of worrying about AI recipe risk for chicken, they should have already rejected any food made by amateur or professional cooks due to excessive risk.

reply
lbarrow
5 hours ago
[-]
Yea, I suppose that is fair regarding cook timings.
reply
pixel_popping
6 hours ago
[-]
but was it done with GPT-5.4 xhigh with an adversarial loop?
reply
racl101
2 hours ago
[-]
First thanksgiving dinner?
reply
layer8
5 hours ago
[-]
I interpret it as an expression of disgust. Similar to how people will stop reading and throw away a good book when they learn the author is a morally reprehensible person.
reply
wak90
5 hours ago
[-]
Like, I wouldn't spit the food out.

But I would be disgusted. Someone told me they planned their vacation with an llm and I couldn't help but express disdain for this friend of mine.

Why are we outsourcing creativity and research and interest in discovery to an llm?

reply
ericd
19 minutes ago
[-]
Would you have disdain for someone who used a human travel agent to plan out an itinerary?
reply
thevinter
4 hours ago
[-]
Probably because the person wasn't interested in planning their vacation and just wanted to enjoy the end result?

Let's not assume different people find the same parts of the process enjoyable.

reply
s1artibartfast
2 hours ago
[-]
AI planned a European honeymoon for my wife and me, and it was fantastic, one of our best vacations. I hate internet travel research. We told it our interests and gave it feedback.

I also discovered the best way to go to an art museum is to walk through with AI, taking pictures of each piece of art. It will tell you the historical context of its creation, a one-page summary of the most fascinating facts. It's like having a team of 100 art history professors in your pocket.

reply
bloody-crow
4 hours ago
[-]
Really don't get this take. I really hate vacation planning and would outsource it in a heartbeat. My partner currently does this for me and seems to enjoy it quite a bit, but if she didn't, the LLM-generated plans I've tried out of curiosity were just as good.
reply
lostmsu
4 hours ago
[-]
> Why are we outsourcing creativity and research and interest in discovery to an llm?

This is also weird. I hate planning vacations, but I like going on them.

reply
dvfjsdhgfv
4 hours ago
[-]
Really? I can think of a few reasons I wouldn't trust AI-generated recipes.
reply
misiti3780
5 hours ago
[-]
lol, if you're against AI recipes, you have bigger problems.
reply
ajross
5 hours ago
[-]
The very fact that your takeaway from that story was "look at how dumb my enemies are" is why this is a conflict worth worrying about.

Are you right? Yeah, basically. Are you going to laugh at your stupid neighbors until they burn your house down in rage? Maybe? You don't treat fear with malice.

reply
happytoexplain
5 hours ago
[-]
I mostly agree that it's an overreaction. However, "irrational" is a really bad choice of word. Every non-technical person understands that sometimes AI says wrong things - like, random, crazy wrong things, not just a little off. It's just a general rule kept in the back of the mind. Food is easily in that realm of "be careful". Did the AI produce a recipe that would be harmful to you and the cook didn't notice? Almost certainly not. So, sure, they were being over-cautious. But "irrational"? No, no, no. It's definitely rational.

Look at what you're writing.

"Doing X is so clearly irrational that I chuckled a bit."

Please don't perpetuate the image of the elitist techie. That is what was just firebombed.

reply
s1artibartfast
2 hours ago
[-]
There is almost nothing seriously dangerous about food, particularly everyday food. There are a handful of niche exceptions, like fugu or poisonous mushrooms that require special preparation.

I think this says more about how neurotic and paranoid people are.

reply
tptacek
3 hours ago
[-]
I operate in at least one social circle that is heavily not-technical (local politics) and I do not see this at all.
reply
AlexCoventry
2 hours ago
[-]
The hatred is particularly intense on Reddit. I lost a couple of accounts there to suspension, just for speaking in a civil way about the positive aspects of AI.
reply
kube-system
2 hours ago
[-]
My experience is somewhat in the middle -- I see educated non-technical people who are strongly against AI because they see it as polluting, "wasting water", and harmful to society. Although many use it anyway.

I could totally believe uneducated or less well-adjusted people reacting in the above way, though.

reply
_aavaa_
1 hour ago
[-]
Non-technical indeed. The wasting water or pollution argument is getting really tiring.
reply
digdugdirk
1 hour ago
[-]
I would be careful about this one. While the overall impact (in the global/national aggregate sense) may not be massive, the impact on individual communities near these new hyperscale datacenters is far greater than most people on this site might think.

Look at the Grok datacenter in Memphis for one example. The "move fast and break things" mentality in this arena isn't about code anymore; it's being applied to communities.

reply
_aavaa_
1 hour ago
[-]
Let’s say we grant the grok example.

A) How many other datacenters with similar problems can you name?

B) How does this industry compare to every other one on earth, and then look at the disproportionate hate this gets compared to other industries that are substantially worse.

reply
throwanem
57 minutes ago
[-]
What do you see?
reply
pesus
2 hours ago
[-]
People in politics aren't that dissimilar to tech bros (especially AI ones) in terms of world view.
reply
tptacek
2 hours ago
[-]
People in "local politics" are random neighbors, almost none of whom are "in politics" in the colloquial sense.
reply
pesus
2 hours ago
[-]
Fair enough, but I still think it at least somewhat applies to people who are willing to get involved in any kind of political process beyond the very basics or perhaps some special interest groups.
reply
tptacek
2 hours ago
[-]
They are nothing remotely like "tech bros", is my point.
reply
TehCorwiz
6 hours ago
[-]
Well, Sam Altman and Jensen Huang are going around bragging about how many people they're going to push out of employment. Might have something to do with it.
reply
wunderlotus
14 minutes ago
[-]
> going around bragging about how many people they're going to push out of employment.

When have they bragged about this?

reply
NickC25
2 hours ago
[-]
This.

Sam's got a $3 billion net worth.

Jensen's got $165 billion to his name.

They are giddy about taking jobs away, and both are engaged in "tax reduction strategies" and suck up to Donald Trump.

You wonder why people are pissed?

reply
bloody-crow
1 hour ago
[-]
> They are giddy about taking jobs away

This is just your interpretation. Mine is that they talk about computers being able to perform some intellectual tasks, now handled by humans, more efficiently and quickly. They're excited about technological progress and the new opportunities it provides, not "giddy" about unemployment and economic uncertainty.

> suck up to Donald Trump

When you're the head of a multi-billion-dollar company that employs thousands of people and answers to a board that expects it to keep growing and making money, it's strategically dumb and irresponsible NOT to suck up to the most childish and vindictive person who has real power to screw over you and all the people you employ, who expect you to do everything in your power to prevent that.

If you don't do that and Trump fucks your company over as a result, you're just bad at your job as a CEO.

reply
tokioyoyo
55 minutes ago
[-]
Where is everyone getting their information from in this thread? It’s like everyone is talking past each other.

I think AI is net-good. But every frontier lab founder has said, paraphrasing, “Things might go horribly wrong, a lot of people might lose jobs; or maybe we’ll have an even better economy and people will prosper. We can’t operate based on the former, because if our adversaries out-invent us, we’re screwed.” All AI-adjacent companies talk about long-term savings, increasing productivity, needing fewer people to do the same jobs, etc. Obviously fully employed people, especially the ones with things to lose, don't want that.

Also, this is not a uniquely American thing. People in China are going through the same stuff.

reply
layer8
5 hours ago
[-]
From a recent NBC News poll, “the only topics that were less popular than AI were the Democratic Party and Iran”: https://www.nbcnews.com/politics/politics-news/poll-majority...
reply
snielson
5 hours ago
[-]
My wife runs a food blog and sometimes uses AI to come up with recipes she tests on us first. One of the best dishes she’s ever made (and one of the best I’ve ever eaten) was pork with an apricot sauce. The pork was fine, but the sauce was absolutely incredible! I’d put it on any kind of meat. Funny thing is, I don’t even like apricots, but the sauce was amazing. My wife does have one advantage, which is that she knows when the AI has hallucinated something crazy and makes appropriate adjustments. I guess it's like anything. AI can be a big help to those who already have a threshold level of background knowledge in a field but can cause big problems for those who don't.
reply
layer8
5 hours ago
[-]
You can’t write something like this and not share the recipe.
reply
happytoexplain
6 hours ago
[-]
There is very strong anti-AI sentiment among "techies" too. It's just not absolute or generalized (AI is a huge umbrella term).
reply
metalliqaz
5 hours ago
[-]
You might call me a "techie", and I both use AI and have very strong anti-AI sentiment. I don't think this is a contradiction, because I believe that while the technology itself is not bad, the way people use it definitely is.

People trust AI outputs in ways they should not. They don't understand its sycophantic design and succumb to AI psychosis. They deploy it in antisocial ways, for war, or spam, or scams. They use it to justify layoffs. They use it as a justification to gobble up public funds. They use it to power their winner-take-all late-stage capitalism economy. It goes on and on.

reply
whimblepop
4 hours ago
[-]
> I both use AI and have very strong anti-AI sentiment.

Me, too. The AI hype machine involves some really bad ideas, the amount of money being poured into "AI" right now distorts everything, public understanding of how these tools work is low, and a lot of contemporary uses both by corporations and governments are irresponsible, dangerous, and likely to produce or reproduce harmful biases and reduce the accountability of humans for crucial decisions and outcomes.

At the same time, it's useful for me at work, and I'm curious about it. I sometimes enjoy using it. It lets me do things I didn't have time for before. It eliminates some procrastination problems for me. I think its use in computing is also likely to be increasingly mandatory for the near-to-moderate term, so it's probably good for me to get used to using it and thinking about it and looking for new useful things it can do for me.

And my own experiences in using AI are part of what drive my anti-AI sentiment as well! I see it do completely insane and utterly stupid things pretty much every day, both in my personal life and in my professional life. I have a visceral awareness of its unreliability because I use it frequently.

I should hope that as hackers we can muster some understanding and respect both for LLM users and for people with hard "anti-AI" stances. Even if you're "pro-AI" to the core (whatever that means), it's worth understanding the most serious and well-considered arguments of critics of LLMs and the contemporary "AI" race. You might even find, as someone who uses and enjoys using LLMs, that you agree with many of them.

reply
slopinthebag
5 hours ago
[-]
I agree completely. The way it's marketed and used is a big part of my distaste, the other part is big tech / AI companies and their actions and ethics. It's why I'm a huge supporter of open source and locally run models, and I am moving most of my workflow to things that I can run on my own machine, or at least on a GPU that I can rent from a plethora of providers.
reply
linkage
6 hours ago
[-]
Politics really is a substitute for religion in America
reply
kelnos
6 hours ago
[-]
In secular America at least. Most people in the US are religious, many of them fervently so.

And quite a few of them like to mix their religion with politics.

reply
racl101
2 hours ago
[-]
> And quite a few of them like to mix their religion with politics.

The two things they told us not to talk about at the dinner table in order to have a better experience.

Maybe it was solid advice after all.

reply
elephanlemon
5 hours ago
[-]
Frankly I think a lot of these people are politics first. How else do you explain the dissonance between Jesus’s teachings and their political opinions?
reply
MiguelX413
4 hours ago
[-]
Their politics are perfectly in line with their Christian-themed cult.
reply
lazyasciiart
3 hours ago
[-]
Yes but when they’re not, they choose politics. See: Catholics right now.
reply
misiti3780
5 hours ago
[-]
this is true, but thankfully, religion is declining in America. although if people are replacing it with politics, maybe we need another revival
reply
leosanchez
6 hours ago
[-]
Religious people can be anti-AI too.
reply
MontyCarloHall
6 hours ago
[-]
Indeed, but the rage I've seen during political fights at family gatherings (and another politics-induced divorce) pales in comparison to the rage I saw in these two anecdotes. The worst political debates I've seen involved raised voices and some name calling, not spitting food and smashing plates. The only other political divorce I've seen slowly simmered over a few years after Trump was first elected, not in a literal matter of weeks.
reply
Kon5ole
5 hours ago
[-]
The remarkable part of your anecdote is the behavior. Seems to me some humans nowadays are less tolerant of any difference in opinion, AI is just the current reason to pick a fight.

Wonder why that is, and if we'll grow out of it peacefully.

reply
lazyasciiart
3 hours ago
[-]
It’ll quiet down once having opinions we don’t like is illegal and/or grounds for being committed to an asylum, the way it was in the old, tolerant days.
reply
bloody-crow
4 hours ago
[-]
Nowadays? It's always been the case, the only thing that changed is the subject.
reply
Kon5ole
2 hours ago
[-]
I think it's gotten way, way worse over the past 20 or so years. I recall having friends spanning several political parties, countries and religions hanging out with barely a sense of tension in the room.
reply
rishabhaiover
5 hours ago
[-]
This was obviously a fictional thanksgiving dinner. Nobody is this geezed up about AI assistance.
reply
TripleTree
5 hours ago
[-]
I would absolutely stop eating a meal if I learned AI was involved in creating it. I suppose I wouldn't literally spit it out but I wouldn't take another bite.
reply
swader999
2 hours ago
[-]
Really? It's just a better way to search for recipes, in my experience.
reply
s1artibartfast
2 hours ago
[-]
Why? What if you found out a human was involved in creating it?
reply
stvltvs
5 hours ago
[-]
Nobody in your circle of friends/acquaintances perhaps.
reply
rishabhaiover
4 hours ago
[-]
You're okay with sitting in the back seat of a car while it drives you around the city, though.
reply
racl101
2 hours ago
[-]
Ironically, I have noticed it's techies and white-collar workers who fear and/or loathe AI the most. Why? Because they're the ones whose jobs are most likely to be threatened by it, or to have already been superseded by it.

My blue-collar work buddies don't feel as strongly or as existentially about it. To them, it's just buzzwordy crap that has ruined entertainment or made the quality of services even worse; more of an annoyance than an outright fear or loathing.

Maybe if the bubble pops and the economy tanks and it affects their bottom line, they'll hate it as much as the aforementioned people.

reply
LooseMarmoset
5 hours ago
[-]
From my own perspective, the "visceral hatred" isn't so much at AI (which I use almost exclusively to generate funny pictures of myself and coworkers) but at the executives that view it as a way to enshittify society.

Turning myself (an overweight bearded guy) into an animated hula dancer, and turning my coworker into the Terminator sinking into molten steel, don't seem to inspire the same hatred. Unless you don't like hula dancers.

reply
gamblor956
1 hour ago
[-]
> I don't think most people in tech are quite aware of the level of visceral AI hatred amongst non-techies.

I work in a non-tech industry and I see this all the time from people, but it's not just limited to AI. SV itself evokes hatred in a lot of people on both sides of the spectrum.

I can't repeat the worst things I've heard, but Altman and his ilk should be terrified of the mob violence they're instigating.

reply
newZWhoDis
3 hours ago
[-]
Portland?
reply
kbelder
2 hours ago
[-]
That is really funny.
reply
sillyfluke
5 hours ago
[-]
I must live in the upside down. If there are any ardent anti-AI people I come across they're techies. Whereas non-techies are either oblivious or completely and comically locked-in as caricatured in that South Park episode.
reply
yfw
2 hours ago
[-]
The only thing we hear is "your jobs are going to be gone", but we still only get healthcare if we work.
reply
alfalfasprout
6 hours ago
[-]
It's quite prevalent in tech too-- however, folks tend to be quiet because the "use AI for everything or else" hammer is being used across the industry.
reply
hnthrowaway0315
3 hours ago
[-]
TBH, people in AI may also resent AI, because they're the first to be impacted by it. They just don't say so openly because, frankly, no one wants to lose their job.
reply
watwut
2 hours ago
[-]
If they divorced within a few weeks, there is zero chance it was solid before the AI disagreement. They were distancing themselves emotionally long before.
reply
whateveracct
1 hour ago
[-]
wow people really are getting psychosis from AI (discourse)
reply
lexandstuff
6 hours ago
[-]
I've found that most non-tech people are indifferent or, at worst, utterly bored by any mention of AI.

The tech people are the ones that have the strongest opinions one way or the other.

reply
kbelder
2 hours ago
[-]
That is my experience, as well.
reply
nothinkjustai
5 hours ago
[-]
Not just non-techies. Plenty of techies share that same visceral hatred. Some of them even use these tools themselves, because it’s a complicated issue with nuances.
reply
lamasery
3 hours ago
[-]
Yep, all of us with a clue are keeping our traps shut at work, or even boosting it or slapping it onto projects that don't need it, because this is clearly one of those things where attempting to offer counsel and advice that's contrary to the way the MBA winds are blowing can only hurt your career.
reply
nothinkjustai
42 minutes ago
[-]
You literally get praised for slopping out code as fast as you can, so why not? It makes your boss happy and gives you job security, cuz eventually that shit will be completely unmanageable with or without LLMs. Gotta hit those KPIs!
reply
satvikpendem
2 hours ago
[-]
I think you're just in a strange bubble of people because those are absolutely comical responses to learning of AI. I do know some people who are for or anti AI to a stronger extent, but most of those I know simply don't give a shit, they'll use AI if it's there, such as for their job or to ask an LLM questions, but otherwise not think about it.
reply
therobots927
6 hours ago
[-]
Most SV people live in a bubble inside of a bubble. They don’t understand how their words come across to a significant portion of the population. If they did they would shut the fuck up.
reply
lexarflash8g
2 hours ago
[-]
Silicon Valley just means a concentration of influential and powerful actors including venture capitalists, executives, entrepreneurs who decide the fate of the technology industry. Basically the elites of the XYZ sector.

Same can be applied for DC (politics/military), NY (finance), LA (entertainment).

SV has been around since the '60s-'90s but didn't get much attention until the beginning of this millennium, due to the huge value creation and control it had over the US economy. They just happen to be relevant in recent times. Hundreds of years ago it was the railroad and oil conglomerates; thousands of years ago, kings and feudal lords. So there is always a powerful, influential class that controls the strings -- it's a feature of human society.

reply
baal80spam
5 hours ago
[-]
Not sure why you were downvoted so heavily. SV is a bubble if I've seen one.
reply
throwanem
6 hours ago
[-]
Surely there must have been underlying tensions in that marriage.

(I don't feel at all confident in that statement; I am requesting reassurance.)

reply
MontyCarloHall
6 hours ago
[-]
They are pretty good friends of mine and I never sensed any tension. It really was a marriage-ending bolt out of the blue, like discovering an affair or severe financial infidelity.
reply
satvikpendem
2 hours ago
[-]
As an outsider you wouldn't know though.
reply
throwanem
5 hours ago
[-]
I don't really want to say "thank you." That story, more to the point that I can't find a priori cause to doubt it, makes me glad I'm about to go enjoy a gorgeous spring afternoon full of birdsong and sunshine. But I appreciate your taking the time to follow up.
reply
gopher_space
3 hours ago
[-]
I mean the simplest way to look at this is that he's just wrong about the couple being happy.
reply
throwanem
3 hours ago
[-]
I was married for a decade. Little of that was happy. (We both made the mistake of marrying each other, then compounded it by both being afraid to be first to admit to having noticed.)

Everyone noticed - and of course I've seen it from the other side, too, many times. You can't hide when people are together who don't want to be. That always shows.

reply
lazyasciiart
3 hours ago
[-]
This is like saying that of course people could tell Ted Bundy was a psychopath, it always shows.
reply
throwanem
2 hours ago
[-]
One might insightfully argue that the whole point of the psychopath is precisely that it doesn't show. I recommend Cleckley, whose seminal definition appears in The Mask of Sanity [1], originally 1941, though I prefer his 1988 fifth edition, especially for its rather disconsolate preface. But even a cursory review of either will trivially show the comparison does not hold.

[1] https://gwern.net/doc/psychology/personality/psychopathy/194... - despite the filename, this is the 1988 edition. I like my paper edition (I made my paper edition) but the PDF will serve well enough for your reference here.

reply
rvz
5 hours ago
[-]
Crypto doesn't get that much hatred, since you don't need to participate in the space, even in non-techie circles. It doesn't affect them and can be safely ignored in its own bubble.

Mentioning "AI" in non-techie circles is a bad idea. It tells you that many here are in a massive bubble, unaware of the visceral hate against AI from people it directly affects and who cannot opt out.

Given that AI takes more than it gives back (jobs, energy, water, houses) of course you will get anti-AI activists.

reply
layer8
5 hours ago
[-]
Except when you’re the victim of ransomware that extorts you to pay some bitcoin. But it seems that fewer people have encountered that than having AI forced upon them.
reply
littlestymaar
6 hours ago
[-]
> after someone revealed that their recipe was AI-generated, a couple people literally spat out the food they were enjoying and threw their plates in the trash

Not entirely unwarranted given the track record of LLMs as chefs, though:

https://www.theguardian.com/world/2023/aug/10/pak-n-save-sav...

https://www.bbc.com/news/articles/cd11gzejgz4o

Of course that was two years ago and it's unlikely to happen again, but that's the drawback of the "move fast and break things" attitude: sometimes you've broken public perception, and that's hard to fix afterwards.

reply
mandeepj
5 hours ago
[-]
> a couple people literally spat out the food they were enjoying and threw their plates in the trash

That was an unnecessarily extreme reaction, as if the AI had 3D-printed the ingredients.

reply
jorgonda
6 hours ago
[-]
Putting millions of people out of work comes with consequences. We are going to see more and more of this.
reply
randyrand
59 minutes ago
[-]
Which jobs? Most of that is still AI hype.
reply
strange_quark
12 minutes ago
[-]
I agree with you, it's mostly hype. But it doesn't really matter whether it's true or not, because the vibes are clearly bad. These execs keep "warning" us that AI will take all the jobs, then keep pouring more money into AI; the press credulously reports it, and people are obviously worried. Most people aren't digging through economic data themselves to figure out these execs are full of shit.
reply
lcnPylGDnU4H9OF
14 minutes ago
[-]
The jobs that the hype is referring to. Any lack of veracity is moot to whether or not it's convincing, else there would be no reason for it.
reply
GlibMonkeyDeath
4 hours ago
[-]
reply
surround
1 hour ago
[-]
It seems Sam Altman has the same suspicion, based upon his response:

> There was an incendiary article about me a few days ago. Someone said to me yesterday they thought it was coming at a time of great anxiety about AI and that it made things more dangerous for me.

https://blog.samaltman.com/2279512

https://news.ycombinator.com/item?id=47724921

reply
anematode
1 hour ago
[-]
Unlikely – no one is starting off undecided, then reading one article in The New Yorker and then committing this. And it's a slippery slope to tie it to legitimate criticism.
reply
tedd4u
3 hours ago
[-]
Trigger warning: AI animation of uncanny-valley Sam Altman "hydra"
reply
networkOne
2 hours ago
[-]
I bet the molotov cocktail was made via instructions from ChatGPT.
reply
niemandhier
3 hours ago
[-]
"Respice post te! Hominem te esse memento!" ("Look behind you! Remember that you are a man!")
reply
ChoGGi
2 hours ago
[-]
When you constantly preach on how your company is able to save money by taking away jobs, I have no fucking sympathy for you.
reply
rambrrest
6 hours ago
[-]
This will only get worse imo - regardless of how Sam is perceived - there is anger against AI which is growing amongst the people. I think we as a society need to stop and have the conversation and be more thoughtful about how we integrate AI with everything.
reply
pixel_popping
5 hours ago
[-]
I don't think this is possible yet, because many people refuse to believe AI will eventually be better than us at practically anything (at least anything virtual). They keep talking about what's "current," which I think is completely irrelevant to that discussion. People need to assume extreme intelligence and orchestration tools (and robots) will be there, worldwide; it's a *fact*, not just a maybe.
reply
toraway
3 hours ago
[-]
It is actually entirely possible to discuss a solution for something that may or may not happen. If a hurricane is approaching, we don't typically require every person to agree the odds of landfall are 100% to start preparing shelters and stockpiling aid nearby. Not everything in the world is about the "AI skeptics" on the internet being dumb and wrong unlike you.
reply
classified
4 hours ago
[-]
Your "fact" is pure vaporware and hallucination.
reply
pixel_popping
4 hours ago
[-]
Let's talk about it again in 5 years. But 1-2 years from now, at the very least, coding will be over in the sense that the best models will do it better than the best humans (or the 99.99%). I don't think I'm hallucinating, no. My own work went from coding + managing + a bunch of other stuff to just orchestrating, and my output is insanely higher. I literally have a bunch of friends who went from coding 8h a day to just "pretending to code": using a bunch of agents and getting paid the same salary for working 30min a day. That's real, not a hallucination.
reply
shash
2 hours ago
[-]
How are you/they instructing those agents? If you are writing detailed spec.md and reviewing those results, you are _still_ programming. Just in pseudocode effectively. I’ve seen enough session transcripts and detailed prompts that would have been easier to just write the code instead!
reply
classified
4 hours ago
[-]
> in 5 years

That's literally the same argument that the blockchain gurus made, and each following year it was still 5 years in the future. I'm getting strong Real Soon Now™ vibes.

reply
AlexCoventry
39 minutes ago
[-]
No, AI has real, immediate, large-scale industrial and scientific applications. Cryptocurrency is a massive disappointment, but AI has genuine potential value.
reply
cleversomething
3 hours ago
[-]
Bitcoin was never actually valuable for the average person except if they got lucky by timing the speculation bubbles right, or if they were buying illegal drugs online.

Lots of AI tools already add actual value and they're only getting better. Every software dev I know uses Claude at some level. Whether it will be the next trillion dollar unicorn might be overhype, but in terms of demonstrating its general utility, it's already there. No need to wait 5 years.

reply
pixel_popping
3 hours ago
[-]
It's really 2 very different things, only the "shilling" might be deja-vu.
reply
sleepybrett
2 hours ago
[-]
> Every software dev I know uses Claude at some level.

here is my level: try to get Claude to properly perform some boring boilerplate for me until the tokens run out or I get super angry, and then do it myself in a rage.

reply
cleversomething
2 hours ago
[-]
what stack do you work in, and how are you prompting it?
reply
pixel_popping
4 hours ago
[-]
come on, that's very different. This is something current, with practical use cases that are already being implemented across all companies. I don't even know why we compare it with blockchain; blockchain is just some fancy resilient DB with proofs in the end.
reply
sleepybrett
2 hours ago
[-]
or Elon and FSD.
reply
therobots927
6 hours ago
[-]
Think Occupy Wall Street, but cranked up significantly.

That’s what’s coming. Like it or not.

reply
linkage
5 hours ago
[-]
I hope "cranked up" was a pun
reply
dmitrygr
1 hour ago
[-]
No surprise, given that a full quarter of those on one side of the political spectrum consider political violence acceptable (~25%; the same figure is 9% for self-identified moderates and 3% for the other side).

Source: https://rb.gy/wdzmsc (YouGov poll, n=2,646, date = sep 10, 2025, question = "Do you think it is ever justified for citizens to resort to violence in order to achieve political goals? (%)", raw data linked under poll graph, downloadable )

reply
lcnPylGDnU4H9OF
15 seconds ago
[-]
One just needs to observe current events to understand that a large portion of ahem "the other side" either fail to understand what "political violence" is, or are simply lying about whether it can ever be justified.

It's currently in vogue for "the other side" to decry political violence; of course they will say it is not justified in a 2025 poll. The only problem is that it is blatantly contradicted by the political violence which is perpetrated and excused by "the other side".

reply
randyrand
51 minutes ago
[-]
“ever”? So we’re not limited to reality or likely scenarios? Just anything I make up as if I was writing a creative fiction novel?

I’m not sure how much this question actually measures propensity towards violence, than it does creativity and imagination.

reply
dmitrygr
33 minutes ago
[-]
Yes, political violence is NEVER justifiable. That is the one and only correct answer if you wish to live in a modern society. And believe me, you do not want to live in the other kind.
reply
randyrand
29 minutes ago
[-]
The question is not limited to modern society! Or even humanity! Just “ever”. I mean what does the universe even look like in hundreds of millions of years? Humans won’t even be around.
reply
dmitrygr
26 minutes ago
[-]
I am not going to play pretend with you. Sorry that you dislike what you see so much as to nitpick at such a microscopic level; such is the data.
reply
randyrand
24 minutes ago
[-]
Then I think you misunderstood the question! I’d expect the HN crowd / programmers to know the importance of limiting scope.
reply
dmitrygr
20 minutes ago
[-]
I can suggest some good statistics books, which will happily explain why questions are purposefully worded this way.
reply
fredgrott
6 hours ago
[-]
How to tell it's not AI or AGI... it throws a Molotov cocktail...
reply
pixel_popping
6 hours ago
[-]
Yeah, Unitrees wouldn't aim that well.
reply
SilentM68
5 hours ago
[-]
Hmm, that's troubling but predictable.

The idea that AI will bring an age of abundance may be true, but not in the short term. Companies are letting people go, and AI will be blamed for that, whether true or not. The public perception that most Tech Bros prioritize profits over the wellbeing of the little guy has been well established for decades, and in my view it is in some cases well deserved, with no accountability.

It's looking like AI will generate a modern version of the early-1800s Luddite Rebellion, in which British textile workers destroyed machines that displaced their jobs and prioritized factory owners' profits over workers. They targeted both the technology and the industrialists.

Tech Bros can avoid this by modifying their priorities: prioritize employee rights and lobby governments to begin implementing some sort of Universal Basic Income, and/or provide the means by which people can survive. Otherwise the government may start marketing Soylent Green to consumers :(

reply
pesus
2 hours ago
[-]
I'd say an important distinction is that AI is currently threatening to displace a significantly larger number of jobs across multiple sectors. Whether it can or will actually happen remains to be seen, but the potential number of scorned people with nothing to lose is far greater this time.
reply
SilentM68
1 hour ago
[-]
I agree. It's hard to know, but if it were me that's creating this type of economy, an uncertain economy which may/may not bring about all these changes, I'd modify the approach to account for any potential human displacement, just in case it does come about.

The outlook from my view, though limited, is so pessimistic that I just keep thinking about a scenario similar to, though not AI-related, Soylent Green or Great Ravine from The Dark Forest, Volume 2 of Liu Cixin's Remembrance of Earth's Past trilogy (aka The Three-Body Problem series).

I keep hoping I come across a 1973 Ford Falcon XB GT Coupe aka "V8 Interceptor" or "Pursuit Special," when all hell breaks loose :|

reply
whimblepop
5 hours ago
[-]
> It's looking like AI will generate a modern version of the early 1800s Luddite Rebellion where British textile workers destroyed machines that displaced jobs, prioritizing factory owners' profits over workers. They targeted technology and industrialists.

It's worth remembering that the way that ended was extremely bloody, particularly for the Luddites themselves. There were a handful of extreme participants, there was a murder, and there was a hell of a lot of violence directed at anyone perceived as a Luddite— even though most actual Luddites themselves mostly avoided violence against other humans.

It would be good if we can somehow avoid such outcomes this time.

reply
SilentM68
4 hours ago
[-]
Greed drives most of the current crop of Tech Bros.

I once had the chance to be a Bro, far richer than any of the current ones, thanks to the still secretive and anonymous "original-sn-adjacent cryptographic collective". Things, however, did not work out in my favor thanks to other nefarious third-party actors. So, I know where from I speak.

Any outcome is in the hands of the Tech Bros but by the looks of it, greed drives their every action, so things are not looking good!

:(

reply
MiguelX413
1 hour ago
[-]
I'm a neo-luddite
reply
Teever
4 hours ago
[-]
I'm going to be blunt about this.

We're going to see the ultrawealthy become targets of drone attacks conducted by people who have terminal illnesses and nothing to lose.

I predict that we'll see a movement start among people diagnosed with a fast-acting terminal illness that gives them a few weeks to months of relatively high functionality followed by a quick downward decline (say, a brain tumour), who decide to kamikaze against the people they feel have gravely wronged them and their kin.

People will use something like this[0] to evade detection but won't really give a shit if they get caught because they'll be dead in a few months.

Even if they don't have access to such technology, they can always just use a firearm, as we've seen people try on Trump, Charlie Kirk, and that healthcare CEO guy, with relative success.

I'm amazed that Peter Thiel is giving talks about the antichrist at the Vatican. I've seen relatively recent videos of him walking down the street with only a security guard or two[1], and they seem completely unprepared for any sort of attack on them from someone with a firearm or a drone.

It's like these people genuinely don't understand how destructive their actions are viewed as by society and the bubbling resentment and rage that is growing towards them.

I'm not sure what the defense against such a movement is. I guess maybe fixing wealth inequality and giving people at least the impression of greater participation in our democratic system?

This[2] is the vibe right now and it's only growing stronger by the day.

[0] https://www.youtube.com/watch?v=qrZ1aH5gtMU

[1] https://www.youtube.com/shorts/pGHIplhJ8Ek

[2] https://genius.com/25966434

reply
stevenwoo
3 hours ago
[-]
The Ministry for the Future beat you to the punch, with victims of human-driven climate change shooting down thousands of private planes with drones as protest.
reply
elsonrodriguez
2 hours ago
[-]
One of the biggest fantasies in that book is that the "protesters" would be so unified and ethical in their plots.

In real life, the attacks in response to climate change (and in this case, economic injustice) will be committed by such an uncountable plurality of groups that the violence will seem almost capricious.

reply
camillomiller
3 hours ago
[-]
>> We're going to see the ultrawealthy become targets of drone attacks conducted by people who have terminal illnesses and nothing to lose.

oh nooooo. anyway

reply
supliminal
2 hours ago
[-]
YC S26 here. We are actually working on something like this right now. Contact info in bio.
reply
sleepybrett
2 hours ago
[-]
We can only hope that when they reveal the identity of this guy, he happens to have a name that overlaps the Mario Bros. universe.
reply
EGreg
6 hours ago
[-]
I've been saying for years on here...

to the people on HN who are against blockchain but bullish on AI

With blockchain and smart contracts or stupid even memecoins, you can only lose what you voluntarily put in. You had to jump through a few hoops, then maybe you got rugpulled, maybe you became a millionaire.

With AI, regardless of whether you consented, you can lose your job, and gradually your relationships and sense of purpose. And if malicious actors want to weaponize it against you, you can lose your reputation, your freedom, get hacked at scale, and much more. The sooner we give biolabs to everyone, the sooner someone can create an advanced persistent threat virus online infecting every openclaw machine, or a designer virus with an incubation period of half a year.

And I know what someone on here will always say. There will always be a comment to the effect of "this has always existed, AI is nothing new". But quantity has a quality all its own. Enjoy your AI slop internet dark forest. Until you don't.

reply
Centigonal
3 hours ago
[-]
Is your definition of bullish "believes the technology will be widely adopted across society and accrue significant wealth to its owners?" - if so, I think it's very clear how someone could be bullish on AI and not blockchain. You don't have to like AI to see it as an inexorable transformer (ha!) of society and wealth.

Is your definition of bullish "believes the technology is a major net good for society?" - if so, you're comparing two technologies with significant social aspirations that come from very different philosophical backgrounds. While both are techno-optimist, Blockchain is a fundamentally libertarian technology, while generative AI comes from a more utilitarian, capital-focused background. People who value individual freedom above all else will get excited about blockchain and feel mixed-to-negative about AI, while people who want to elevate the overall capability of the human race to the exclusion of anything else will get excited by AI and see blockchain as a parlor trick.

reply
nickvec
6 hours ago
[-]
https://archive.ph/aoXIY

@dang didn't see this post before posting the archive.ph link at https://news.ycombinator.com/item?id=47722344 - feel free to delete/merge that thread with this one

reply
rvz
6 hours ago
[-]
The problem here is that there are no viable solutions for what happens when AI eventually replaces (yes, replaces) tens of millions of humans in white-collar roles.

All that is being "promised" are vague claims of "abundance". But all I see is this:

"AGI" is going to bring an abundance of very angry people and UBI for no one (because it can never work at a large, sustainable scale).

Some people are starting to realise that "AGI" was a grift and a scam, and they are not happy about the lie. The insiders knew it, which is why they increased spending on security and private bodyguards.

reply
operatingthetan
6 hours ago
[-]
I don't think LLMs will produce AGI, just based on how context windows work, the prompt cycle, etc. LLMs aren't out there thinking about stuff in their spare time. The way they appear to have thoughts and a psyche is purely an illusion.
reply
fooqux
6 hours ago
[-]
Something I often think about is how we can barely define what AGI, consciousness, etc are. We may be pretty sure that what we have currently is an illusion, but at which point is the illusion good enough that it no longer matters? Especially with regards to my first question.

It's hard to say it's not X when we can't really define X.

reply
ethanrutherford
5 hours ago
[-]
I would personally argue that it's a lot easier to say something definitely isn't X, with confidence, than to say it definitely is. I definitely don't know what the surface of Jupiter looks like, but I can pretty confidently say it doesn't look like Kansas. I think the better it gets, the easier it will be to spot the shortcomings, because the gap between what it can do well and what it can't will widen. Anything the technology is fundamentally incapable of ever achieving will be made obvious by the fact that it will simply continue to not achieve it. We may not be able to easily define the totality of what exactly it needs to have to count as AGI, but the further it progresses, the easier it will be to point out individual things it's definitely missing.
reply
operatingthetan
5 hours ago
[-]
I'm not saying we can't build it, but what we have right now certainly is not it. Right now context is just a bunch of text. Surely the human mind's context resembles something more like a graph database. What if we could use a database for context?
reply
andsoitis
5 hours ago
[-]
> LLMs aren't out there thinking about stuff in their spare time.

Agentic changes the calculus.

reply
operatingthetan
5 hours ago
[-]
Explain how? Even if you are using crons or heartbeats to reactivate the model they are still dependent on context windows that are quite small. With frontier models I still have to remind them how stuff works, stuff they forgot or focused on the wrong thing, etc.

Also every AI company is motivated to have us use their models _just enough_ to want to pay for them, but not more than that.

reply
booleandilemma
6 hours ago
[-]
It doesn't have to produce AGI to still ruin the lives of millions of people. Our society isn't ready for that kind of shock. We can't all be Instagram influencers.
reply
josefritzishere
6 hours ago
[-]
My first thought was false flag. Is that too cynical?
reply
foota
6 hours ago
[-]
I would go for out of touch, not cynical. A lot of people really think AI is the devil.
reply
risyachka
6 hours ago
[-]
It will be hard to convince them otherwise when their jobs are replaced by AI and they are in their late 40s or later, with no time to adjust and learn a new craft.
reply
swader999
2 hours ago
[-]
They or them? It's all of us I'd bet.
reply
polotics
6 hours ago
[-]
Possible, but unlikely. To organize such a stunt and stay undetected, you're going to need a better consigliere than what Sam's got, I presume.
reply
josefritzishere
5 hours ago
[-]
Like another commenter wrote... anyone can cast a fireball. Sam has been called a sociopath by many who know him personally. So it seems more likely than it might be otherwise.
reply
ReptileMan
4 hours ago
[-]
Nope. So was mine.
reply
stevenwoo
3 hours ago
[-]
It kind of fits with the behavior he exhibited as reported by Farrow in New Yorker article.
reply
dlev_pika
2 hours ago
[-]
I'm surprised we haven't heard of more direct-action incidents; there is no way the shameless behavior of our high-profile oligarchs isn't ruffling more than a few feathers.

Maybe they are just not reporting near misses

reply
boznz
6 hours ago
[-]
I guess this is what we get when the media and politicians go all in on populist AI hate. I don't think I've seen a positive AI headline outside of the tech press, and even there they are pretty thin. Abundance and growing the pie for everyone is also a possible outcome, if this is done right.
reply
acdha
3 hours ago
[-]
> Abundance and growing the pie for everyone is also an outcome if this is done right.

That’s like saying we don’t need minimum wage or unions because companies choosing to treat workers with respect is also a possible outcome. It’s technically true but once you go from “is this theoretically possible?” to “is this likely?” it becomes obvious that the answer is no. Most of the big AI backers are openly salivating at destroying millions of jobs, and they’re already evading taxes now so they’re not going to be funding UBI willingly — and if you have any doubt, look at where their political spending goes, consistently to the people who are doing their best to remove what small taxes they’re still paying and declaring war on the concept of regulated markets.

reply
lexicality
5 hours ago
[-]
> Abundance and growing the pie for everyone is also an outcome if this is done right.

Do you genuinely believe there's any chance that's going to happen?

reply
boznz
4 hours ago
[-]
I do, because the alternative is unthinkable.
reply
nickvec
3 hours ago
[-]
I would argue that "abundance and growing the pie for everyone" is even more unfathomable given how things are structured currently. The wealth gap will continue to widen until something gives.
reply
DrProtic
3 hours ago
[-]
Can’t believe your comment is being downvoted.

Covid clearly showed how crisis can only benefit the rich and powerful.

How can AI being used to cut headcount somehow be good? It will just fill the pockets of the powerful.

reply
impossiblefork
2 hours ago
[-]
Why do you think that the fact that the alternative is unthinkable is a reason it won't happen?

Are you also sure that it is unthinkable to those running these companies? I wouldn't be surprised if these models end up being used for internal security-- that people would try to keep an extremely unequal society stable by surveillance and massive analysis capabilities. I think it's apparent that some use of this sort already occurs and that these companies are already participating.

reply
array_key_first
3 hours ago
[-]
Well then given that one side is "the situation remains neutral or very slightly improves" and the other side is "unthinkable atrocities", I think it's only rational to focus on the "unthinkable atrocities" part. Ideally, we should be focusing all our energy into making sure that doesn't happen.
reply
jyounker
2 hours ago
[-]
Closing your eyes doesn't make the danger go away.
reply
yoyohello13
2 hours ago
[-]
What? This makes no sense. You don't like the alternative so you choose to believe in the best outcome?
reply
senordevnyc
5 hours ago
[-]
Looking at the last few hundred years of our civilization, absolutely!
reply
mghackerlady
5 hours ago
[-]
or, hear me out, people are just sick of it? They don't care that their masters are sniffing each other's AI-powered farts to keep the economy afloat on the promise of their obsolescence. Sure, in theory it could be good for them; they could get more work done quickly. But why would they be kept alive if their owners no longer need to rely on them? The ideal business has no expenses, and workers are one of those. Combine that with everything being shit nowadays, and yeah, I can't blame whoever did this.
reply
gdulli
2 hours ago
[-]
Was social media "done right"?
reply
archagon
3 hours ago
[-]
I think the media and politicians are reflecting popular sentiment, not the other way around.
reply