There are so many examples showing that not having advertising is just the first step toward having advertising, that the advertising will then be optimized for profit, and that it will frustrate users.
Scary to think about, if moving away from "Don't be evil" is the precedent for an "AGI company"
In every tech generation, for good or bad, mainstream consumers choose the ad-based product over the paid product.
So every company that wants scale in the long-term ends up adopting an ad-based free tier to avoid becoming niche, it seems. Even the majority of HN users now appear to use gmail despite paid email hosts being incredibly cheap.
Edit: Not sure why the downvotes. Would you prefer that OpenAI leaves Google (who is ad-supported) to win the general public? I'm saying the above as someone who does purchase the ad-free plan when available, and uses paid email.
I tried to evaluate Gmail alternatives (Mxroute, Cranemail) and some VPS costs, looking for something most people might use for their use cases and actually "own". (Ownership is relative, of course, with sometimes as much autonomy as Google itself has, since I'm sure Google also partners with datacenters, which is technically similar to colocation.) But these providers are usually autonomous and give you far, far more freedom than the arbitrary terms and conditions set by, say, Google for Gmail.
If we do some cost analysis, I feel these can be cheap, whether for a frugal person like me who will cut extreme corners while still evaluating everything, or for a more average person who might join a particular forum during Black Friday and learn which providers are the best or are running deals, or who just sticks with a single provider. On average, I feel the cost shouldn't exceed $25-30 per month for mail, a domain, and a VPS to host open-source software in. In essence, this is the price of their privacy.
For people in countries with a strong currency, this is a great deal, and they benefit greatly from it; they already spend much more on far less impactful things than their privacy.
The problem isn't the pricing model; on the contrary, the problem feels to me like something deeper.
It feels psychological. I've observed that people buy Twitter Blue checkmarks, Discord Nitro, etc. (which can probably cost as much as, if not more than, running one's own Matrix/XMPP and Mastodon servers, which would provide unlimited freedom of modification instead).
The problem, to me, is that people pay in this context not because of the real value but because of the apparent value instead.
For them, the value of buying a checkmark and being part of, say, 1,000,000 or 100,000 members out of 100,000,000 (think Twitter) feels better than being 1 out of 25,000-50,000 (running Mastodon).
Why is that the case? Because I think they aren't thinking in percentages but in absolute numbers: they "beat" 99,900,000 people, rather than being one member of a unique but small community (the Mastodon example again, where one might feel less satisfied recognizing they are number 100 out of 50,000 or similar). That holds unless they assign more value to privacy than to this apparent psychological value.
So, coming back to the Twitter example: people are likely and willing to pay more money while owning nothing, on a platform where the deal sucks in real value by just about every measure, yet because of this numbers/psychology effect, the deal can make sense. (There is also the influence factor: these websites create artificial scarcity of something unlimited and then fulfill it, and the people who get it feel rarer and gain more influence. That's how people feel on Discord, at the very least.)
Another issue with this system: since it relies on massive numbers of people, and on people wanting to pay for a strange deal only after the masses arrive, the companies have to offset costs until then. Meanwhile, the scope of these companies' influence grows, which attracts the type of people notorious in the VC industry. That link to VC, I feel, causes a focus on growth followed by maximal rent extraction, almost like a landlord, something even Adam Smith wouldn't really appreciate, but that's a point for another day.
My point is that evil becomes an emergent property of such a system. Even when better options arise, they still face friction at the start, while the incumbents are predictable. "Evil" here means starting out smooth and ending rough (take Google as an example, or Reddit); this is "enshittification".
So people are more likely to support evil if other people support it as well, and the definition of evil is based on common morals. Our morals simply haven't caught up to these technological advancements: most people aren't aware of the extent of the damage and privacy breaches, and as these companies gain influence and power, lobbying and withholding that very information become easier, because they themselves are becoming the landlords of information.
So is this path inevitable? No, not really. Earlier I mentioned the $30, but companies like Proton can offer deals where you still get privacy without the tech know-how, which might be good for the average person. And people are pushing back, but only once they understand everything I said above (in their own way) and the value of privacy starts to rise.
I definitely feel there is a psychological, follow-the-masses effect here, and I am sure these companies deploy psychologists as well. In a way, our brains still run on primordial hardware, acting as if we are hunting in the jungle today and might die tomorrow if we don't get food, when we now have to think 10-20 years ahead.
So, as much as we on Hacker News might like to think we are smart, I feel the amount of psychological research put into these algorithms is precisely why even we, of all people, might use Gmail.
I don't believe the answer is that it's a superior product, but rather the psychological and other reasons I mentioned, and this is also precisely why the small-computing or indie-computing movement (where individuals like you and me create computing businesses and services that you and me can take part in) struggles compared to the large tech behemoths.
Honestly, thinking about it: just as we say to fight fire with fire, should we combat psychology with psychology? Effectively creating a movement that can go "viral", using these social media platforms as its hosts to spread a positive idea instead of a negative one, which could effectively limit the influence of the algorithm itself.
In fact, the anger against this system is so great that even a well-intentioned idea like "Clippy" became a movement that amassed at least millions in a similar fashion.
So I guess we need more Clippy-like movements, and we need psychologists to help us develop them, so that we can channel our collective energy into them instead of dissipating it and going nowhere.
Pardon me if this feels a little off topic; I haven't re-read the post and have just gone with the flow, writing whatever came to mind after talking it through with myself once. The idea of an indie tech movement is something I think deeply about from time to time.
[0] https://indieresearch.net/2014/03/30/advertising-and-mixed-m...
> Ads are always separate and clearly labeled.
Indeed. Let's look at Google's launch of Adwords in October 2000:
> Google’s quick-loading AdWords text ads appear to the right of the Google search results and are highlighted as sponsored links, clearly separate from the search results.
https://googlepress.blogspot.com/2000/10/google-launches-sel...
Things evolved from there, and they will likely evolve here as well, I think.
The enshittification begins.
The only saving grace is the promise to have an ad-free tier.
If something is valuable to you, paying for it to not have ads is very reasonable.
Fixed that for you, and I agree fully with that assessment.
I don't think degradation or decay capture this...those are more associated with a process in nature, or due to the laws of physics, but especially something unintentional (like "bit rot").
I like Cory Doctorow, so I might be a bit biased here. Would be interested in alternatives that capture the intentional aspect.
Now you may say that sucking less than Microsoft and OpenAI really isn't a high bar at all. And I fully agree with that.
> Our mission is to ensure AGI benefits all of humanity; our pursuit of advertising is always in support of that mission
Wow that is how Google looks these days?
Step 2: Sell the top result slot.
Step 3: Profit.
(At which point will malignant/benevolent AI agents take over from us mere mortals poisoning the well and make it all useless?)
https://www.theregister.com/2026/01/11/industry_insiders_see...
Theodore Roosevelt would own you at Golden Eye.
ChatGPT is a useful product, which they're monetising in a well-travelled internet company way. The bad news is you're going to have ads in your ChatGPT in 2030. The good news is you're still going to have a job in 2030.
Having revenue from their free users might just be a way to make the business more sustainable, and/or to make fundraising from investors easier (which has immediate benefit).
Seeing the message "you've reached your limit..." makes free users switch to other AI providers, and ads are a way to fund higher limits. Their prime competitor, Google, has ad income from its users, so it has an advantage.
That's a Netflix + Hulu subscription - with ads in both. Before streaming people regularly paid $50/mo (not adjusted for inflation) for cable TV with ads.
While it's easy to bemoan Google pushing ads into every corner of our digital lives, I think they arguably offered an unprecedented level of services relative to the number of ads, and we all got used to that.
Now whether OpenAI could ever push enough ads to make a profit: I have no idea! It's very interesting to see this race actually start.
The combination of technical prowess and relative wealth of the average HN commenter means I bet we see 1/100th the ads of the average consumer. It's wild out there.
> we plan to test ads at the bottom of answers in ChatGPT when there’s a relevant sponsored product or service based on your current conversation.
There is a severe disconnect between these two statements: the advertiser now knows what your conversation was about! This gives ad campaigns a lot of leverage to craft targeting criteria that identify the exact behavioral and interest segments they want.
The industry would quickly develop data exfiltration or user reporting to merge this information and improve inferences, with confidence values on target populations and individuals.
Anyway, I think they will do both: conversation data would not be sold to advertisers, but statistics will be given freely.
Edit: they made sure to use the word "trust" 5 times because nothing is more trustworthy than someone telling you how trustworthy they are.
Reminds me: "we and our 947 partners value your privacy"
Are they mincing words here? By "selling your data" they mean they'll never package the raw chats and send them to whoever is buying ads. OK, neither does Google. But they'll clearly build detailed profiles on every preference or product you mention, your age, your location, etc., so they know what ads to show you? "See, this is not your data, it's just preference bits."
I'd guess an advertiser can ask OpenAI to "show this ad to people between 18-34", and then anyone who clicks and buys is certainly known to be 18-34, since they came from the ad. But there would be no way for advertisers to directly buy a list of people who are 18-34 but don't buy something from their website.
That's how it often works and seems in the spirit of the sentence you quoted.
So yes, it sounds like they'll do exactly what you say. And they will probably have much better user data than Google gets from search, because people divulge so much in chats. I wonder how creepily relevant these ads will get...
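The flow being described, where advertisers submit targeting criteria, the platform matches against private profiles internally, and only aggregate counts flow back, can be sketched in a toy form (all names here are hypothetical, not any real ad API):

```python
from dataclasses import dataclass

# Toy sketch of "targeting without selling data": the platform keeps
# inferred user attributes private, matches campaigns internally, and
# reports only aggregate impression counts back to the advertiser.

@dataclass
class Campaign:
    ad_id: str
    min_age: int
    max_age: int
    impressions: int = 0  # the only number the advertiser ever sees

class AdPlatform:
    def __init__(self):
        # user_id -> attributes inferred from usage; never exported
        self._profiles = {}

    def observe(self, user_id, age):
        self._profiles[user_id] = {"age": age}

    def serve(self, user_id, campaign):
        # Matching happens entirely inside the platform.
        profile = self._profiles.get(user_id)
        if profile and campaign.min_age <= profile["age"] <= campaign.max_age:
            campaign.impressions += 1
            return f"[sponsored] {campaign.ad_id}"
        return None

platform = AdPlatform()
platform.observe("u1", 25)
platform.observe("u2", 52)
camp = Campaign("acme-shoes", 18, 34)
shown = [platform.serve(u, camp) for u in ("u1", "u2")]
print(camp.impressions)  # -> 1 (only u1 matched; identities stay internal)
```

The point of the sketch is that no user list ever leaves `AdPlatform`; the advertiser only learns that someone in the 18-34 bucket saw the ad, and, via click-through, that a given buyer came from it.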
This single sentence probably took so many man-hours. I completely understand why they’re trying to integrate ads but this feels like a generational run for a company founded with the purpose of safely researching superintelligence.
That's how all the major ads platforms work. I don't personally agree that it constitutes "selling your data" but certainly people describe it that way for Google/Meta ads which function the same way. By framing it this way they're clearly trying to fool users who really bought into the messaging that Google et al literally sell user data when they only provide targeting. I guess the hope is that the cleaner reputation of OpenAI will mean people think there's some actual difference here.
Bitch, where's my money?
- sama0: "Every ad on Google is clearly marked and set apart from the actual search results." https://archive.md/fiK4E#selection-219.13-219.95
1: "Every Google result now looks like an ad" (which means every ad looks like a search result) https://news.ycombinator.com/item?id=22107823
2: "Google breaks 2005 promise never to show banner ads on search results" https://news.ycombinator.com/item?id=6605312
3: (2024) "OpenAI is developing Media Manager, a tool that will enable creators and content owners to tell us what they own and specify how they want their works to be included or excluded from machine learning research and training." https://openai.com/index/approach-to-data-and-ai/
4: (2023) "OpenAI promised 20% of its computing power to combat existential risks from AI — but never delivered" https://fortune.com/2024/05/21/openai-superalignment-20-comp...
5: (2025) "REPORT: The OpenAI Files Document Broken Promises" https://techoversight.org/2025/06/18/openai-files-report/
Maybe OpenAI does things differently, but as soon as an OKR around ad performance gets committed to, the experience will degrade. Sure, they're not selling data; however, they'll almost certainly have a direct-response channel where advertisers tell OpenAI what you've interacted with and when. Ads will be placed and displayed in increasingly aggressive positions, although it'll start out non-intrusive.
I'm curious how their targeting will work and how much control they'll give advertisers at the start. Will they allow businesses of all sizes? Will they allow advertisers to control how their ads work? I bet Amazon is foaming at the mouth to get its products fed into ChatGPT results.
I've heard this before from other companies.
OpenAI should just reject all advertisements. That's the only real solution.
And I'm skeptical ads will remain outside the ChatGPT output for very long. You can hide a div tag, but you can't hide an advertisement blended into the "conversation" with ChatGPT. Is ChatGPT recommending product X because they're an advertiser, or because that's what it "learned" on the internet? Did it learn from another advertisement?
I fully expect them to exploit the plausible deniability.
I look grimly forward to the future of adblock, which I predict will literally involve a media interception and re-rendering agent that sits between us and everything we see, hear, read, etc. AR goggles that put beach pictures over bus stop posters and red squigglies under sentences with a high enough adtech confidence score. This shit's gonna get real weird in our lifetimes.
I've been bullish for OpenAI, but that's starting to fade. Sama is a master of the artful dodge, though, so it'll be interesting to see what happens. Between the burn rate and the lawsuits and the need for more compute, there's a ton of pressure on them right now.
(typically the ~entire general public chooses the ads: see how ad-supported products beat paid in virtually every generation of tech)
I've heard this before...
The free and $8 new “Go” tier will include ads.
> And though my lack of education hasn't hurt me none I can read the writing on the wall
We shall be good. Pinky promise.
The next step is to have them natively in the output. And it'll happen at a scale never seen.
Google had a lot more push-back, because they used to be the entity that linked to other websites, so showing the AI Overview was a change of path.
OpenAI embedding the advertisements in a natural way is much, much easier. The public already expects links to products when they ask for advice, so why not change the text a little to glorify a product when you're asking for a comparison between products A and B?
Logically, it seems they have either strategised this poorly (seems unlikely), are under immense immediate financial pressure to produce revenue (most likely, I presume), or see no development on the horizon big enough to justify delaying the shift, so they're just doing it now.
So ChatGPT constantly ending all responses with tangents and followups is not for engagement?
Of course they are going to "anonymise" the chats and only extract keyword summaries.
But, as some people are generally more candid with chatbots, de-anonymisation through keyword selection is trivially possible.
It won't just stop at ultra-precise demographic selection (i.e. all males 35-40, living in London, worried about hair loss). They will offer scenarios that Facebook/Instagram could only infer or dream of:
"middle aged woman with disposable income unhappy with spouse."
Where it gets interesting is how they will provide proof that the advert has landed/reached eyeballs.
When first trying 5.2, on a "Pro" plan, I was - and still am - able to trigger the shopping assistant via keyword-matching, even if the conversation context, or the prompt itself, is wildly inappropriate (suicide, racism, etc).
Keyword-matching seems a strange ad strategy for a (non-profit) company selling QKV. It's all very confusing!
Hopefully, for fans of personal super-assistants--and advertising--worldwide, this will improve now that ads have been formalised.
Also, anything that benefits OpenAI or keeps our runway just a bit longer is (by definition) in support of our mission, so we can do anything that we want and say that it is for the good of humanity.
I guess in the meantime, they will be able to use chat histories to personalize ads on a whole new level. I bet we will see some screenshots of uncomfortably relevant ads in the coming months.
> we’re also planning to start testing ads in the U.S. for the free and Go tiers, so more people can benefit from our tools with fewer usage limits or without having to pay
No, that is not why they're doing it. They're doing it to make money.
> Our mission is to ensure AGI benefits all of humanity
No, that is not their mission. Their mission is to make money.
If they wanted to benefit all humanity they would axe the entire operation, do a complete 180, and use all their money to fight as hard as they can against everyone else who is doing what they're doing now.
it's the same thing for them
they really want more engaged active (addicted) eyeballs, the more friction they can remove the easier it is to make this happen
From an ethical standpoint, I think it's... murky. Not the ads themselves, but the fact that the AI is, at least partially, likely trained on data scraped from the web, which is then more or less regurgitated (in a personalized way) and presented with ads that pay the original content creators nothing. So it's: let's consume what other people created, repackage it, and profit off of it.
They didn't even start with just the free tier; a paid subscription is already included.
(I continue to be shocked how many people—who should know better—are in denial that the entire "industry" of Generative AI is completely and utterly unsustainable and furthermore on a level of unsustainability we've never before seen in the history of computer technology.)
How far away are we from an offline, model-based ad blocker? Imagine a model trained to detect whether a response contains ads, blocking them on the fly. I'm not sure how else you could block ads embedded in responses.
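A minimal sketch of the idea, with a crude keyword heuristic standing in for the trained model (the marker list and scoring are entirely hypothetical; a real version would run a small local classifier over each sentence):

```python
import re

# Hypothetical on-device "ad blocker" for chatbot output: score each
# sentence for promotional content and drop anything above a threshold.

AD_MARKERS = re.compile(
    r"\b(sponsored|limited[- ]time offer|use code|buy now|visit our)\b",
    re.IGNORECASE,
)

def ad_score(sentence: str) -> float:
    # Stand-in for a trained model's probability that this is an ad.
    return 1.0 if AD_MARKERS.search(sentence) else 0.0

def strip_ads(response: str, threshold: float = 0.5) -> str:
    # Split on sentence boundaries, keep only low-scoring sentences.
    sentences = re.split(r"(?<=[.!?])\s+", response)
    return " ".join(s for s in sentences if ad_score(s) < threshold)

reply = ("Wool dries slower than synthetics. "
         "Buy now at ExampleGear, a limited-time offer! "
         "Either fabric works for day hikes.")
print(strip_ads(reply))
# -> "Wool dries slower than synthetics. Either fabric works for day hikes."
```

Of course, the hard case is exactly the one raised above: an ad woven into the substance of the answer ("product X is great") carries no surface markers, so a filter like this can only catch the clumsy insertions.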
I can't imagine what else anyone could have thought they were there for
So, Google would appear to have boxed OpenAI out of the #1 use case, and already has all the pieces in place to monetize it. This move by OAI isn't surprising, but is it too late to matter?
If you meant it in a different context, you didn't explain any of the actual context you had in mind.
The same sleight of hand that’s been used by surveillance capitalists for years. It’s not about “selling your data” because they have narrowly defined data to mean “the actual chats you have” and not “information we infer about you from your usage of the service,” which they do sell to advertisers in the form of your behavioral futures.
Fuck all this. OpenAI caved to surveillance capitalism in record time.
What you’re reacting to isn’t just “ads.” It’s the feeling of: Someone monetizing the collective output of human thought while quietly severing the link back to the humans who produced it.
That triggers a very old and very valid moral instinct.
Why "sleazy" is an accurate word here: it usually means technically allowed, strategically clever, and morally evasive.
This means little. Anyone that has your data could potentially feed it in to do their own task.
I'm out.
I mean, they certainly know that introducing ads will be a huge motivation for consumers to seek other options.
The primary differentiator of OpenAI is first mover advantage; the product itself is not particularly unique anymore.
IMHO consumers will quickly realize that switching to an alternative AI provider is easy and probably fun.
This seems premature, giving up their moat in the name of revenue. Are they feeling real financial pressure all of a sudden? Maybe I'm missing something. Looks like a big win for Google and Anthropic.
The whole company is built on lies and deception.
More related: I pay for Kagi, because Google results are horrible.
More related: ChatGPT isn't the only model out there, and I've just recently stopped using 5 because it's slow and there are other models that come back and work just as well. So when ChatGPT starts injecting crap, I'll just drop it for something else.
Would you still shop at Walmart if, every time you walked in, the greeter spat in your face and told you to go F yourself?
Big G will crush them. No "ensuring AGI benefits all of humanity." Just doing a desperate money grab.
You still can, no-one is stopping you now.
What I'm not okay with is being served ads while using Codex CLI, or Codex CLI gathering data outside of my context to send to advertisers. As long as they're not doing that, I won't complain.
If they start doing that, I'll complain, and I'll need to more heavily sandbox it.
If no services remain I’ll run one of my own in the cloud or my server.
Fuck. Ads.
it doesn’t save my life, but at least i’m seeing more relevant ads now :) not getting detergent ads while searching for perfume is still nice, all things considered.
Also, your newspaper is selling the data points it has. If it had more, it would sell more. See: your local paper isn't selling ads to a car wash six towns over. They do, however, sell ads that align with the political affinities of your local newsroom's area.
This is disingenuous. Putting up a billboard over a highway to make people aware of a certain brand of beer is not the same as building detailed profiles on people in order to sell to the highest bidder the opportunity to change your behavior right when you're likely to do so. But somehow, this user puts them together with the very convenient "regardless of scale."
Maybe you're OK with an entire industry that makes money trying to get you to do what they want: buy what they want, think what they want. Maybe you're OK with your past behavior being written on a shadow ledger, sold to the highest bidder, traded on the dark web, and used by governments. It's your right to be okay with that, since it's your life. But you being okay with it doesn't change the fact that this is a fundamentally different type of behavior from what is commonly called "advertising." It's a curious equivocation, this sane-washing, and it does make one wonder why an otherwise intelligent person feels the need to do it.
FTFY