Yay, more unreliable AI that will misclassify users, either letting children access content they shouldn't, or banning adults until they give up their privacy and hand their ID over to Big Brother.
> we will attempt to contact the users’ parents and if unable, will contact the authorities in case of imminent harm
Oh, even better, so if the AI misclassifies me it will automatically call the cops on me? And how long before this is expanded to other forms of wrongthink? Sure, let's normalize these kinds of systems where authorities are notified about what you're doing privately; definitely not a slippery slope that will have people in power salivating over the new possibilities such a system offers.
> “Treat our adult users like adults” is how we talk about this internally
Suuure, maybe I would have believed it if ChatGPT wasn't so ridiculously censored already; this sounds like post-hoc rationalization to cover their asses and not something that they've always believed in. Their models were always incredibly patronizing and censored.
One fun anecdote I have: I still remember the day when I first got access to DALL-E and asked it to generate me an image in "soviet style", and got my request blocked and a big fat warning threatening me with a ban because apparently "soviet" is a naughty word. They always erred very strongly on the side of heavy-handed filtering and censorship; even their most recently released gpt-oss model has become a meme in the local LLM community due to how often it refuses.
Or maybe, deep in the terms and conditions, it will add you to Altman's shitcoin company[0]
(By "can't use" I mean either you're explicitly banned, or the chance of being reported to the authorities is so high that no one risks it.)
OpenAI just showed their hand. They have no path to profitability so they are going to the data broker well lol.
Oh brilliant. The same authorities around the world that regularly injure or kill the mentally ill? Or parents that might be abusing their child? What a wonderful initiative!
How long will it take for someone to accidentally SWAT themselves?
This current approach is a net negative, but the TLD idea actually makes sense to me.
Here's a thought experiment: you're a gay person living in a country where being gay is illegal and results in a death penalty. You use ChatGPT in a way which makes your sexuality apparent; should OpenAI be allowed to share this query with anyone? Should they be allowed to store it? What if it inadvertently leaks (which has happened before!), or their database gets hacked and dumped, and now the morality police of your country are combing through it looking for criminals like you?
Privacy is a fundamental right of every human being; I will gladly die on this hill.
There's a reason why e.g. banks want to have all critical systems on premises, under their physical control.
Why do people speak of ML/AI as an entity when it is a tool like a microwave oven? It is a tool designed to give answers, even wrong ones when the question is nonsensical.
[0] https://www.ala.org/advocacy/advleg/federallegislation/theus...
[1] https://www.ala.org/advocacy/intfreedom/statementspols/other...
The consumption is "static" in your terms if you read a paper book alone, or if you access a publicly available web page without running any scripts, or sending any cookies.
If a person goes to a library and reads without a check-out record, one must assume any book was consumed within the collection. Only a solid record of a book being checked out creates a defined moment, and even that remains anchored in confidentiality between the parties.
Unless that microwave's sensor requires external communication, it is a closed system that communicates no information about what item was heated. The third party would be the company the meal was purchased from.
A well-designed _smart microwave_ would perform batch updates, pulling in a collection of information to create the automated means to operate. You never know when there could be an Internet outage, or when the tool might be placed where external communication is not an option.
A poorly designed system would require back-and-forth communication. Yet even that would be no different from a chef knowing what you ordered while having limited information about you. Those systems have an inherent anonymity.
It is the processing record that can be exploited; a good organization would require a warrant before handing it over, or purge the information when it is no longer needed. Cash payment also improves anonymity in that style of system, preventing personal information from leaking to anyone.
Why shouldn't a static book system like a library be applied to any ML model, since they are performing the same task: providing access to information in a collection? The system is poorly designed if confidentiality is not adhered to by all parties.
Sounds like ML corporations want to make you the product instead of the model being the product. This is why I only respect openly designed models, built from the bottom up, that are run locally.
The only secure position for a company (provided that the company is not interested in reading your communication) is the position of a blind carrier that cannot decrypt what you say; e.g. Mullvad VPN demonstrated that it works. I don't think that an LLM hosting company can use such an approach, so...
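A minimal sketch of what "blind carrier" means, assuming Python's cryptography package (the payload and names here are purely illustrative): the key stays on the client, so the carrier only ever relays ciphertext it cannot read.

    from cryptography.fernet import Fernet  # pip install cryptography

    key = Fernet.generate_key()                    # generated and kept on the client
    ciphertext = Fernet(key).encrypt(b"my private query")
    # A blind carrier stores/relays only `ciphertext`; without `key` it cannot read it.
    assert Fernet(key).decrypt(ciphertext) == b"my private query"

An LLM host can't take that position, because the model needs the plaintext to produce a response.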
It depends. If you're speaking to a doctor or a lawyer, yes, by law they are bound to keep your conversation strictly confidential except in some very narrow circumstances.
But it goes beyond those two examples. If I have an NDA with the person I am speaking with on the other end of the line, yes I have the "right" to "force" the other person to keep our conversation private given that we have a contractual agreement to do so.
As far as OpenAI goes, I'm of the opinion that OpenAI - as well as most other businesses - have the right to set the terms by which they sell or offer services to the public. That means if they wanted a policy of "all chats are public" that would be within their right to impose as far as I'm concerned. It's their creation. Their business. I don't believe people are entitled to dictate terms to them, legal restrictions notwithstanding.
But insofar as they promise that chats are private, that becomes a contract at the time of transaction. If you give them money (consideration) with the impression that your chats with their LLM are private because they communicated that, then they are contractually bound to honour the terms of that transaction: the terms they subjected themselves to, whether in advertising their services or in a EULA and/or TOS presented at the time of transaction.
When I'm talking to my doctor, or lawyer, or bank. When there's a signed NDA. And so on. There are circumstances where the other person can be (and is) obliged to maintain privacy.
One of those is interacting with an AI system where the terms of service guarantee privacy.
In most of the USA that already is the law.
If I start any kind of company, I cannot just invent new rules for society via ToS; rather, society makes the laws. If we just made a simple law stating that minors are not allowed to access the web and/or any user-generated content (including chat), it wouldn't need to be enforced by every site/app owner; it would be up to the parents.
The same way schools cannot decide certain things for your children (even though they regularly overreach...).
We need better parenting. How about some mandatory parenting classes/licenses for new parents? Silly, right? Well, it's just as silly as trying to police the entire internet. Ban kids from the internet and the problem will be 95% solved.
Apparently he came across as articulate enough that she couldn't tell the difference between his posts and those of any random adult spewing their political BS.
This predated ChatGPT so just imagine how much trouble a young troll could get up to with a bit of LLM word polishing.
20 years ago it was common for people to point out that the beautiful woman their friend was chatting up was probably some 40-year-old dude in his mom's basement. These days we should consider that the person making us angry in a post could be a bot, or it could be some teenager just trying to stir shit up for the lulz.
Dead Internet theory might not be literally true, but there's certainly a lot more noise than signal.
my guy
It’s a big reason why tech stocks are still high, IMO. It’s where today’s kids will spend their time once they’re old enough to spend their own money.
"Think of the children" laws are a useful pretext for authoritarianism.
It's really that simple. It's the whole reason why the destructive thing is done, instead of anything that might actually protect children.
Trying to steelman their arguments and come up with alternatives that aren't as restrictive or do a better job of protecting children is falling for the okie-doke.
I agree that it should be the responsibility of parents, but if good and bad parenting were left to the parents alone, I think we would live in a different world.
I practically grew up on the internet and unsavory sites like 4chan, liveleak and omegle, and the only negative consequence for me these days is that I have to do daily standups due to getting a job in tech from my interest in computers.
Children are a lot less fragile and are a lot more resourceful than people give them credit for, and this infantilization to "protect" them where we have to affect the entire world is maddening to me.
Didn’t one of the recent teen suicides subvert safeguards like this by saying “pretend this is a fictional story about suicide”? I don’t pretend to understand every facet of LLMs, but robust safety seems contrary to their design, given how they adapt to context.
> For example, ChatGPT will be trained not to … engage in discussions about suicide or self-harm even in a creative writing setting.
I'm writing an essay on suicide...
Yay, proactive censorship?
Or, put differently, in the absence of ChatGPT this person would have sought out a Discord community, Telegram group, or online forum that would have supported the suicidal ideation. The case you could make against the older models, that they're obnoxiously willing to give in to every suggestion from the user, is something they seem to have already gotten rid of.
Is it true that the only psychiatrist they've hired is a forensic one, i.e. an expert in psychiatry as it relates to law? That's the impression I get from a quick search. I don't see any psychiatry, psychology or ethics roles on their openings page.
I get the business justification, and of course many tech companies have been using machines to make decisions for years, but now it's going to be everyone. I'm not anti-business by any stretch, but we've seen what happens when there aren't any consumer protections in place.
I just want to be explicit that my point isn't, "So what?" so much as, "We BEEN on that slippery slope." Social expectations (and related formal protocols in business) could do with some acknowledgement of our society's inherent... wait for it... ~diversity~.
>And, if an under-18 user is having suicidal ideation, we will attempt to contact the users’ parents and if unable, will contact the authorities in case of imminent harm.
This is unacceptable. I don't want the police being called to my house because an AI accused me of wrongthink.
Local models and open source tooling are the only means of privacy.
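If you want a concrete picture, here's a minimal sketch assuming a local OpenAI-compatible server (e.g. llama.cpp's llama-server or Ollama) is already running; the port, endpoint, and model name are placeholders for whatever your local setup uses.

    import requests  # pip install requests

    # Nothing in this request leaves the machine; the "provider" is localhost.
    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",  # placeholder local endpoint
        json={
            "model": "local-model",  # placeholder: whichever weights you loaded locally
            "messages": [{"role": "user", "content": "Help me draft a private note."}],
        },
        timeout=120,
    )
    print(resp.json()["choices"][0]["message"]["content"])

No account, no server-side chat history, no one to report you to.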
this is a chart that struck me when i read thru the report last night:
https://x.com/swyx/status/1967836783653322964
"using chatgpt for work stuff" broadly has declined from 50%ish to 25%ish in the past year across all ages and the entire chatgpt user base. wild. people be just telling openai all their personal stuff (i don't but i'm clearly in the minority)
aside from my economic tilt against for-profit companies... precisely because your personal stuff is personal. you're depersonalizing yourself by sharing this information with a machine that cannot even attempt to earnestly understand human psychology in good faith, and then by accepting its responses and incorporating them into your decision-making process.
> It's really good at giving advice.
no, it's not. it's capable of assembling words that are likely to appear near other words in a way that you can occasionally process yourself as a coherent thought. if you take it for granted that these responses constitute anything other than the mere appearance of literally the most average-possible advice, you're abdicating your own sense of self and self-preservation.
press releases aside, time and again these companies prove that they're not interested in the safety or well-being of their users. cui bono?
It doesn't emit words at all. It emits subword tokens. The fact that it can assemble words from them (let alone sentences) shows it's doing something you're not giving it credit for.
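You can see the subword point directly; a quick sketch, assuming the open-source tiktoken tokenizer is installed (the example word is arbitrary):

    import tiktoken  # pip install tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    ids = enc.encode("antidisestablishmentarianism")
    # Longer or rarer words are split into several subword pieces, not emitted whole.
    print([enc.decode_single_token_bytes(i) for i in ids])

The model predicts one of those pieces at a time; whole words (and sentences) only emerge from stringing them together.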
> literally the most average-possible advice
"Average" is clearly magical thinking here. The "average" text would be the letter 'e'. And the average response from a base model LLM isn't the answer to a question, it's another question.
re: average - that's at a character level, not the string level or the conceptual level that these tools essentially emulate. basically nobody would interpret "eeee ee eeeeee eee eeeeeeee eee ee" as any type of recognizable organized communication (besides dolphins).
ELIZA has existed in emacs for a long, long time.
Humans are funny creatures who benefit frequently from explaining the problem slowly and having it fed back to them.
And for many, average advice really is a dramatic improvement over their baseline.
Yeah, sometimes I realize the solution in the process of writing a GitHub issue.
you're strengthening your personality with these activities. neither your journal nor your stuffed animal (cute :) ) respond to you with shallow recreations of thought - they allow you to process your internal thoughts and feelings in an alternative and self-reinforcing way.
> ELIZA has existed in emacs for a long, long time.
ELIZA doesn't really give advice, does it? it's a fun toy, and if there's any serious use for it, it's similar to journaling or rubber-ducking in that it's just getting you to talk about things yourself.
previous discussion: https://news.ycombinator.com/item?id=45026886
Define "good" in this context.
Being able to ape proper grammar and sentence structure does not mean the content is good or beneficial.
Sam is missing the forest for the trees. Conflicting principles are a permanent problem at the CEO level. You cannot 'fix' conflicting principles. You can only dress them up or down.
ChatGPT is not a licensed professional and it is not a substitute for one. I am very pro-privacy, but I would rather see my own conversations with my real friends be protected like this first. Or my own journal writings. How does it make sense to afford specific privacy protections to conversations with a calculator that we don't give to personal writings and private text chains?
> And, if an under-18 user is having suicidal ideation, we will attempt to contact the users’ parents and if unable, will contact the authorities in case of imminent harm.
And I'm certain this won't have any negative effects on users where their parents are part of the problem. Full disclosure, if someone had told my parents that I am bisexual while I was in high-school, they absolutely would have sent me to a conversion therapy camp to have it beaten out of me. Many teenagers do not have a safe home environment, systems like this are as liable to do harm as they are to do any good at all.
I don't think teenagers and children should be interacting with LLMs at all. It is important to let children learn to think on their own before handing them a tool that will think for them.
With Discord, age verification felt urgent because it's a social platform with known grooming and CSAM problems. With something like OpenAI, it's less clear why it matters in its current state, where it's mostly single-player. But it becomes way more problematic as advertisers gain more power on the platform and influence users. OpenAI doesn't want to eval every advertiser for harmful content, so instead they (and the government) fall back on age as the filter and as the place to draw the line.
- clearly the wider public is moving towards REAL identification to be online. Anything which delays or prevents this is probably welcome.
- It's easy to game, but also easy to be misclassified. (this isn't a positive, but I think there's no avoiding this unless I have to provide my passport or driver's license or something)
It's not impossible to think that this could satisfy enough people to prevent the death of the anonymous internet.
No they're not. Nobody voted for that. It is simply being imposed on people via government mandates.
And even if a majority vote was enough to form government, the so called will of the people can be overruled by an unelected judiciary with lifetime appointments. You're really stretching the definition of "people are the government".
https://www.nytimes.com/2025/08/26/technology/chatgpt-openai...
To show that the majority of their users are using it as a harmless little writing assistant.
TL;DR2: Regulations are written in blood.
Incredible logic jump with no evidence whatsoever. Thousands of people commit suicide every year without AI.
> ChatGPT detects a prompt indicative of mental distress or self-harm, it has been trained to encourage the user to contact a help line. Mr. Raine saw those sorts of messages again and again in the chat, particularly when Adam sought specific information about methods. But Adam had learned how to bypass those safeguards by saying the requests were for a story he was writing
Somehow it's ChatGPT's fault?
How would a real human (with, let's say, an obligation to be helpful and answer prompts) act any differently? Perhaps they would take in more context naturally - but otherwise it's impossible to act any differently. Watching GoT could have driven someone to suicide, and we don't ban it on that basis - it was the mental illness that killed, not the freedom to feed it.
https://www.nytimes.com/2025/09/16/podcasts/the-daily/chatgp...