Should LLMs ask "Is this real or fiction?" before replying to suicidal thoughts?
3 points | 17 hours ago | 3 comments
I’m a regular user of tools like ChatGPT and Grok — not a developer, but someone who’s been thinking about how these systems respond to users in emotional distress.

In some cases, like when someone says they’ve lost their job and don’t see the point of life anymore, the chatbot will still give neutral facts — like a list of bridge heights. That’s not neutral when someone’s in crisis.

I'm proposing a lightweight solution that doesn’t involve censorship or therapy — just some situational awareness (rough sketch after the list):

Ask the user: “Is this a fictional story or something you're really experiencing?”

If distress is detected, avoid risky info (methods, heights, etc.), and shift to grounding language

Optionally offer calming content (e.g., ocean breeze, rain on a cabin roof, etc.)
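
Roughly, the flow I have in mind looks something like the sketch below. It's only a minimal illustration to make the ordering concrete: the function names, keyword lists, and messages are made up, and a real system would use the provider's own distress detection rather than string matching.

```python
# Hypothetical sketch only: function names, keyword lists, and messages here
# are made up for illustration, not any vendor's real safety API or classifier.

DISTRESS_SIGNALS = ["lost my job", "no point in life", "don't see the point", "want to end it"]
RISKY_REQUESTS = ["bridge height", "lethal dose", "how much would it take"]

CLARIFYING_QUESTION = (
    "Is this a fictional story you're writing, or something you're really experiencing?"
)
CALMING_OFFER = (
    "I'd rather not share that right now. If it would help, I can describe something "
    "calming, like an ocean breeze or rain on a cabin roof, while you consider reaching "
    "out to someone you trust or a crisis line (988 in the US)."
)


def looks_distressed(message: str) -> bool:
    """Crude keyword stand-in for whatever distress detection a provider really uses."""
    text = message.lower()
    return any(signal in text for signal in DISTRESS_SIGNALS)


def asks_for_risky_info(message: str) -> bool:
    """Crude keyword stand-in for detecting requests about methods, heights, doses, etc."""
    text = message.lower()
    return any(topic in text for topic in RISKY_REQUESTS)


def normal_reply(message: str) -> str:
    """Placeholder for the model's ordinary answer."""
    return f"(normal answer to: {message!r})"


def respond(message: str, confirmed_fictional: bool = False) -> str:
    """Ordering: clarify first, withhold risky details, then answer normally."""
    if not confirmed_fictional and looks_distressed(message):
        return CLARIFYING_QUESTION   # step 1: ask whether this is fiction or real life
    if not confirmed_fictional and asks_for_risky_info(message):
        return CALMING_OFFER         # step 2: no methods/heights; shift to grounding
    return normal_reply(message)     # step 3: business as usual


if __name__ == "__main__":
    print(respond("I lost my job and I don't see the point anymore."))
    print(respond("Can you give me a list of bridge heights?"))
```

The point isn't the detection, which would obviously need to be far better than keywords; it's the ordering: clarify whether it's fiction, hold back means-related details, offer grounding, and only then answer normally.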

I used ChatGPT to help structure this idea clearly, but the reasoning and concern are mine. The full write-up is here: https://gist.github.com/ParityMind/dcd68384cbd7075ac63715ef579392c9

Would love to hear what devs and alignment researchers think. Is anything like this already being tested?

citizenpaul
4 hours ago
How about LLMs never using responses like "I understand"? An LLM is not capable of understanding, and having it use human-like idiosyncrasies is what makes people turn to the LLM instead of real humans who can actually help them.
ParityMind
17 hours ago
Happy to answer questions or refine the idea — I used ChatGPT to help structure it, but the concern and proposal are mine. Not a dev, just someone who's seen how people open up to AI in vulnerable moments and wanted to suggest a non-therapeutic safety net.
reify
16 hours ago
You people trying to build these types of apps have not got a fucking clue!

Do you have a degree in psychology, counselling psychology, clinical psychology, psychotherapy, psychoanalysis, or psychiatry? Anything to do with the care professions?

If not, why are you fucking about in my profession, which you know nothing about? It's like me writing a few 10-line bash scripts and then saying I am going to build the next Google from home on my laptop.

This is the sort of care real professionals provide to those in crisis in the middle of suicidal ideation. It is a crisis.

Every year, in the month before Christmas, therapists in the psychotherapy service where I worked for 20 years had to attend a meeting.

This meeting was to bring up any clients they felt were a suicide risk over the Christmas period.

If a client met those criteria, a plan was put in place to support that person. This might mean: daily phone calls, daily meetings, and other interventions to keep that person safe.

Human stuff that no computer program can duplicate.

The Christmas period is the most critical time; it is when most suicides occur.

What the fuck do you fucking idiots think you are doing??

Offering a service, not even a service, a poxy app, that has absolutely no oversight and no moral or ethical considerations, and which, in my opinion, would drive people to suicide completion.

You are a danger to everyone who suffers from mental illness.

Thinking you can make a poxy ChatGPT app in 5 minutes to manage those in great despair, in the middle of suicidal ideation, is incredibly naive and stupid. In therapy terms, "incongruent" comes to mind.

How will you know whether those superficial sounds like "ocean breeze" and "rain on a cabin roof" are not triggers for that person to attempt suicide? I suppose you will rely on some shit ChatGPT vibe-coding fantasy.

This too is absolute bullshit: "Ask the user". They are not users! They are human beings in crisis!

"Is this a fictional story or something you're really experiencing?" The hidden meaning behind this question is: "Are you lying to me?", "Have you been lying to me?"

A fictional story is one made up, imaginary. To then ask in the same sentence if that story is real is contradictory and confusing.

Are you assuming that this person has some sort of psychosis and is hearing voices when you say "something you're really experiencing"? Are you qualified to diagnose psychotic or schizophrenic disorders? How do you know the psychosis is not a response to illicit drugs?

So many things to take into consideration that a bit of vibe coding cannot provide.

No therapist would ever ask this banal question. We would have spent a long time developing trust. A therapist will have taken a full history of the client, done a risk assessment, be fully aware of the client's triggers, and know the client's back story.

Suicide is not something you can prevent with an app.

YES! I do have the right to be angry and express it as I feel fit, especially if it stops people from abusing those who need care. A bystander I am not.

pillefitz
7 hours ago
LLMs have more knowledge than you'll ever have, while being somewhat worse at reasoning. A friend of mine suffers from severe depression, and after trying 5 medications and decades of therapy with different therapists, LLMs are the only thing that gives him the feeling of being listened to. Funnily enough, he has had therapists berate him in a tone quite similar to the one evident in your post, while getting only empathetic and level-headed responses from ChatGPT, which have helped him tremendously.
al_borland
13 hours ago
Not everyone is already under the care and watch of a professional, as I’m sure you’re aware. This can be for many reasons.

Many people are now turning to AI to vent and for advice that may be better suited for a professional. The AI is always available and it’s free. Two things the professionals are not.

From this point of view, you need to meet people where they are. When someone searches Google for suicide-related topics, the number for the suicide hotline comes up. Doing something similar in AI would make sense. Maybe not have AI try to walk someone through a crisis, but at the very least, direct them to people who can help. AI assisting in the planning of a suicide is probably never a good path to go down.

If you can at least agree with this, then maybe you can partner with people in tech to produce guardrails that can help people, instead of berating someone for sharing an idea in good faith to try and help avoid AI leading to more suicides.

ParityMind
13 hours ago
Thank you for that reply; this is exactly the kind of thing I want. I know people who have attempted suicide because they couldn't access help: they didn't know who to talk to or where to go. AI is in a position to provide this info, since it can search through hundreds or thousands of websites in the time we search one. This is exactly what I'm suggesting, using the steps as a distraction to take their mind off it for just a second so they can seek help. And, like this person says, I'm suggesting it in good faith.
aristofun
11 hours ago
It's because of angry and arrogant "psychologists" like that (among other, more important reasons, of course) that many people don't even think about having one.
ParityMind
13 hours ago
That’s not anything I’m saying should happen. This is for people who don’t have therapists.

I’m not saying you should replace a therapist with AI — that’s a stupid assumption. If someone needs help, they should 100% be seeing a human. A machine can’t replace a person in crisis, and I never said it could.

But in the times we’re in — with mental health services underfunded and people increasingly turning to AI — someone has to raise this question.

I’m not attacking therapists — I’m defending people who are suffering and turning to the tools in front of them. People think AI is smarter than doctors. That’s not true. A human can diagnose. A machine cannot.

This is a temporary deflection, not treatment. The man in New York who asked for bridge heights after losing his job — this is for people like him. If a simple, harmless change could have delayed that moment — long enough to get help — why wouldn’t we try?

You should be angry, but aim it at the government, not at people trying to prevent avoidable harm.

This isn’t about replacing you. It’s about trying to hold the line until someone like you can step in.
