Grok got me to demand the CT scan that saved my life from a ruptured appendix
15 points | 4 hours ago | 6 comments | old.reddit.com
cluckindan
3 hours ago
This smells of freshly laid astroturf.

xAI wants to know your symptoms :)

delichon
3 hours ago
> Came home, opened a year-long chat I have with Grok, described everything.

Is it common for people to do that? I've had enough problems with long context windows that I start a new chat on every topic change. I'd be particularly worried about that kind of context corruption when exploring an urgent, painful problem.

hodgehog11
3 hours ago
Surprisingly, many do. When I mention to people (family, friends, etc.) that they should open new chat windows for new topics due to memory corruption, it's pretty clear they never even considered the possibility that the model can go off the rails if the chat is too long. Later I often get a comment like "wow, I kept thinking this AI stuff was rubbish, but it's really good now!".
tyleo
2 hours ago
Every article I’ve read about someone developing a psychological disorder around AI involved one of these long chats.

It’s made me want to try a long chat myself, just to understand whether there’s something meaningfully different about the experience.

wryoak
1 hour ago
I had a similar experience. I think doctors are just wary of cutting people open due to the risks. Anyway, there's a long history in my family of males rupturing their appendices before puberty. The doctors were not convinced by the family history, however. When they finally pulled it out, after months of painful inflammation episodes that they had dismissed in a variety of ways, they saw its unusual size (3x!) and scar tissue and fought insurance to get the op covered. This was back in the nineties though, before LLMs.
armchairhacker
1 hour ago
I've heard similar stories involving ChatGPT, and I believe at least some of them. Why not? ERs are known to misdiagnose patients who turn out to have serious conditions; there are examples of this exact scenario (appendicitis misdiagnosed as a stomach bug). It's also known that Googling symptoms will surface serious diseases that someone with those symptoms may or may not actually have, so an LLM will do the same.

Ultimately, reliably diagnosing patients from descriptions alone is impossible, because two patients can give the same description while one has a mild condition and the other a serious one. More tests would resolve most of the ambiguity, but most ERs are too overworked, so you only get more tests if you really advocate for yourself (and/or are rich). A stomach bug can present with nearly the same symptoms as the initial stages of appendicitis, and Grok (allegedly) told OP he needed to advocate for a CT.

I do see a problem where LLMs tell too many people that they have a serious condition and must go to the ER and demand unnecessary tests, when a doctor would tell those people "you're fine" and they really are fine. People would end up anxious and angry at doctors over nothing, and ERs would be so overwhelmed with spurious tests and requests that they would miss necessary ones. I say "would", but this situation has already started; it's just Google, rather than an LLM, telling people to panic.

Ultimately, there are too many people and not enough doctors and equipment (and too many tests may itself be unhealthy), so people who may have serious conditions aren't tested for them. I don't know the solution, but given all the talk of LLMs replacing large sections of the workforce, I do want to ask: why don't we, as a society, train more nurses and doctors?

arghandugh
2 hours ago
Grok is the product of a literal mass-murdering Nazi; this patron of a mass-murdering Nazi should probably consider using a different product.
extasia
1 hour ago
politics really is the mind-killer
yawpitch
1 hour ago
Sure am glad that in the pre-Grok days I just happened to already know the signs of one of the world’s most common gastro-intestinal emergencies. I’m honestly not sure how anyone else survived prElon.