It had that strange vibe of seeking out vulnerable people to prey on, almost the way gambling ads do.
https://www.nbcnews.com/tech/tech-news/big-beautiful-bill-ai...
---
My original ramble below:
This is a Very HN Comment. The problem with healthcare in the US isn't that we don't let Sam Altman administer healthcare via souped-up text prediction machines. It's a disaster precisely because we let these greedy ghouls run the system under the guise of "saving money". In the end, privately insuring only a portion of the population costs significantly more than having the big, inefficient, bureaucratic government provide baseline insurance for everyone.
The least bad healthcare systems in the world strip out much of the profit motive. Not all regulation is good, but the US refuses to let go of the idea that all regulation is bad.
If LLMs are to be used in healthcare, they should have to clear an incredibly high bar of evidence, just as doctors have to prove themselves before being certified. Right now, there's no such evidence that I'm aware of. And even with certification, we still get bad doctors. What happens when an LLM advises a patient to kill themselves? Probably nothing. Corporations go unpunished in this country. At least bad practitioners are held responsible for their actions.
Programmers and startup founders constantly claim their two-week product will save the world, but most of the time it does not. That's fine when we're talking about household light management, but a half-baked AI therapist is as much a quack as any human fraudster.
The only difference is that human fraudsters can be prosecuted, while companies and startups insist on being above the law.
If you’re going to have it pass tests, those should be the same tests doctors face: graduation requirements, clinical training, and licensing exams.