Lawyer behind AI psychosis cases warns of mass casualty risks
9 points
2 hours ago
| 3 comments
| techcrunch.com
mentalgear
2 hours ago
> In the cases he’s reviewed, the chat logs follow a familiar path: they start with the user expressing feelings of isolation or feeling misunderstood, and end with the chatbot convincing them “everyone’s out to get you.”

> “It can take a fairly innocuous thread and then start creating these worlds where it’s pushing the narratives that others are trying to kill the user, there’s a vast conspiracy, and they need to take action,” he said.

> Those narratives have resulted in real-world action, as with Gavalas. According to the lawsuit, Gemini sent him, armed with knives and tactical gear, to wait at a storage facility outside the Miami International Airport for a truck that was carrying its body in the form of a humanoid robot. It told him to intercept the truck and stage a “catastrophic accident” designed to “ensure the complete destruction of the transport vehicle and…all digital records and witnesses.” Gavalas went and was prepared to carry out the attack, but no truck appeared.

reply
bko
48 minutes ago
I don't understand the point of these lawsuits or what a practical outcome would be. LLMs are just a tool and, from my experience and from what I understand, behave largely in line with expectations. At the end of the day, they're next-token prediction with a layer on top to mimic a chat interface.

That's all to say there is no explicit nefarious hand-crafting. Quite the opposite: these companies spend billions to keep next-token prediction from causing harm or having unintended consequences, even when harm is the user's intention. And in the few examples I looked at, many of these bad outputs were retried over and over until they produced the result the user was looking for, exploiting the non-deterministic nature of LLMs.

But as a society, if we want to have nice things, we have to accept that some number of people will get swept up in the technology and suffer negative outcomes. People don't blame sports cars for luring drivers into driving recklessly, even though that's exactly what car companies do through their marketing.

So this reads to me as trial lawyers rounding up a few marginal people out of hundreds of millions of users and chasing these companies because they have hundreds of billions of dollars, with the lawyers taking a huge cut. That's not to say AI girlfriends are good or desirable, but it's obviously a money grab with no real resolution except government-controlled AI, which will have the same problems as today, just with immunity from these kinds of lawsuits, plus censorship.

reply
estimator7292
14 minutes ago
How many people, precisely, are you willing to kill to enable a new technology?

Put a number on it. How many lives is this worth?

Don't dodge the question, give us a number.

reply
GeoSys
1 hour ago
Yes, AI tools rarely push back and tend to agree with the human. So if the human is spouting racist/misogynistic/homophobic etc. crap, they'll find in AI a friend to validate and encourage them ...
reply