Illinois Bans AI from Providing Therapy
19 points
by rntn
10 hours ago
| 4 comments
| gizmodo.com
duxup
9 hours ago
[-]
I got what looked like an AI powered therapy advertisement on youtube recently.

It had that strange vibe that seemed like they're looking for vulnerable people to prey on, almost like gambling ads do.

reply
xrd
9 hours ago
[-]
If this had made it into the BBB, it would have been so bad. I'm glad states can regulate on their own.

https://www.nbcnews.com/tech/tech-news/big-beautiful-bill-ai...

reply
ktallett
5 hours ago
[-]
Until it is regulated and can pass safety tests, this is sensible. For those struggling with psychosis or schizophrenic disorders, AI therapy could be incredibly harmful.
reply
SilverElfin
8 hours ago
[-]
I hope this isn’t the start of states banning AI in nonsensical ways when it could be a great way to boost health and reduce healthcare costs. There is so much regulatory capture and broken incentives in the healthcare system.
reply
McAlpine5892
4 hours ago
[-]
Is there any evidence that LLMs are safe to deploy as practitioners in healthcare settings? Until there is significant evidence, we shouldn't allow them.

---

My original ramble below:

This is a Very HN Comment. The problem with healthcare in the US isn't that we don't let Sam Altman administer healthcare via souped-up text prediction machines. It's a disaster precisely because we let these greedy ghouls run the system under the guise of "saving money". In the end, privately insuring only a portion of the population costs significantly more than having the big, inefficient, bureaucratic government provide baseline insurance for everyone.

The least bad healthcare systems in the world take out a significant amount of the profit motive. Not all regulation is good, but the US refuses to let go of the idea that all regulation is bad.

If LLMs are to be used in healthcare, they should have an incredibly high bar of evidence to clear, and right now there's no such evidence that I'm aware of. Doctors need to prove themselves before being certified, and even then we get bad doctors. What happens when an LLM advises a patient to kill themselves? Probably nothing. Corporations go unpunished in this country. At least bad practitioners are responsible for their actions.

reply
watwut
7 hours ago
[-]
Have it pass the same set of tests as any other medical device.

Programmers and startup owners constantly claim their two-week product will save the world, but most of the time it does not. That is fine when we're talking about household light management, but a half-baked AI therapist is as much a quack as any human fraudster.

The only difference is that human fraudsters can be prosecuted, while companies and startups demand to be treated as above the law.

reply
SilverElfin
7 hours ago
[-]
But why should things be locked down in the first place? Why do I need to go through doctors and insurance and all of this for simple diagnostic tests and the obvious prescriptions that follow from them? It's so frustrating, especially having to repeat it every few months. My point is that we are already in a state of regulatory capture, and with this new era of technology we need to abandon that. Maybe not fully, but for many things.
reply
cestith
6 hours ago
[-]
This article isn’t about AI acting as a medical device. It’s about it acting as a mental health practitioner.

If you’re going to have it pass tests, those should be graduation requirements, clinical training, and licensing exams.

reply