How will OpenAI compete?
126 points | 7 hours ago | 27 comments | ben-evans.com
daxfohl
5 minutes ago
[-]
I just wonder how long it'll take local models to be good enough for 99% of use cases. It seems like it has to happen sooner or later.

My hunch is that in five years we'll look back and see current OpenAI as something like a 1970s VAX system. Once PCs could do most of what they could, nobody wanted a VAX anymore. I have a hard time imagining that all the big players today will survive that shift. (And if that particular shift doesn't materialize, it's so early in the game that some other equally disruptive thing will.)

reply
shubhamjain
2 hours ago
[-]
Everyone is underestimating stickiness. The near billion users OpenAI has are actually a real moat and might translate into a decent chunk of revenue.

My wife, for example, uses ChatGPT on a daily basis, but has found no reason to try anything else. There are no network effects, for sure, but people have hundreds or thousands of conversations on these apps that can't be easily moved elsewhere. It's understandable that it would be hard to get the majority of these free users to pay for anything, and hence advertising seems a good bet. You couldn't think of a more contextual way of plugging a paid product.

I think OpenAI has a better chance of winning on the consumer side than everyone else. Of course, whether that holds up against hundreds of billions of dollars in capex remains to be seen.

reply
OscarTheGrinch
31 minutes ago
[-]
I think you're right about stickiness, up to a point.

Cultural defaults seem unchangeable, but then suddenly everyone knows that everyone knows that OpenAI is passé.

OpenAI has a real chance to blow their lead, ending up in a hellish no-man's land by trying to please everyone: Not cool enough for normies, not safe enough for business, not radical enough for techies. Pick a lane or perish.

Not owning their own infrastructure, and being propped up by financial / valuation tricks are more red flags.

Being a first mover doesn't guarantee getting to the golden goose, remember MySpace.

reply
CharlesW
1 hour ago
[-]
> Everyone is actually underestimating stickiness.

I think you're underestimating how fickle consumers are, and how much their choices are based on fashion and emotion. A couple more of these, and OpenAI will find itself relegated to the kids' table with Grok and Perplexity. https://www.technologyreview.com/2025/08/15/1121900/gpt4o-gr...

reply
protocolture
51 minutes ago
[-]
> The near billion users OpenAI has is actually a real moat and might translate into decent chunk of revenue.

People used to suggest this about MySpace.

reply
randerson
1 hour ago
[-]
In theory you can export your data from ChatGPT under Settings > Data Controls. In practice, I tried this recently and the download link was broken. Convenient bug I must say.
reply
akhilchaturvedi
32 minutes ago
[-]
Make sure you're logged in to chatgpt.com in the same browser you're using to access that link.
reply
svnt
19 minutes ago
[-]
How would you navigate to it if you were not?
reply
keyle
1 hour ago
[-]
It would literally take you 5 mins to set up your wife with a competing client for her needs.

Sure it's 'sticky', at least a little, but it's not a moat. A moat is a showstopper: something that means they own you.

reply
atomicnumber3
53 minutes ago
[-]
Idk, habit and the devil you know are powerful as hell. Google has enshittified search nearly beyond imagination, but it's still where the vastly overwhelming majority of people search.
reply
tcoff91
49 minutes ago
[-]
What free search engine today performs significantly better? No seriously Google sucks and I want an alternative. Do I need to pay for Kagi to get decent search?
reply
daxfohl
53 minutes ago
[-]
Yahoo, AltaVista, Ask Jeeves, Google

Friendster, MySpace, Facebook

Netscape, IE, Chrome

ICQ, AIM, MSN Messenger, a million other chat apps

First-mover advantage doesn't last long

Very high chance that the winner in five years is a company that does not yet exist

reply
pm90
2 hours ago
[-]
> My wife, for example, uses ChatGPT on a daily basis, but has found no reason to try anything else.

Ads might change that. If we know anything, it's that nobody beats Google at ad-based monetization. OAI is absolutely correct to be scared.

reply
neya
1 hour ago
[-]
OpenAI is already building complex user models. And I mean super detailed user models: where you are from, what you do, what your most vulnerable weaknesses are, what you care about most, and everything else. This is information even the world's largest advertising company would struggle to put together across its fragmented ecosystem (Gmail, Search, etc.), but OpenAI has it all on a silver platter. And that scares me, because a lot of people use ChatGPT as a therapist. We know where this is headed because of the advertising intent they've explicitly expressed. Advertising requires good user models to work (so advertisers can efficiently target their audience), and it's the only way to prove ROI to advertisers. "But OpenAI said they won't do targeted ads..." Remember, Google said "Don't be evil" once upon a time too.

That's ok, we use ChatGPT only for coding. We should be good, right? Umm, no. They already explicitly expressed the intention to take a percentage of your revenue if you shipped something with ChatGPT, so even the tech guys aren't safe.

"As intelligence moves into scientific research, drug discovery, energy systems, and financial modeling, new economic models will emerge. Licensing, IP-based agreements, and outcome-based pricing will share in the value created. That is how the internet evolved. Intelligence will follow the same path."

"Intelligence will follow the same path."

https://openai.com/index/a-business-that-scales-with-the-val...

So yes, OpenAI has a better chance of winning on the consumer side than anyone else. But that's not necessarily a good thing (and the OpenAI fanboys will hate me for pointing this out).

reply
thrwaway55
1 hour ago
[-]
Isn't half the appeal of AI that they can write a prompt like move all my text history from OpenAI to Claude and then they do it?
reply
GCUMstlyHarmls
31 minutes ago
[-]
But the (royal) Wife needs to 1) know that exporting is a concept, 2) know that automating an export is possible, 3) know that you could ask Claude to do it, and 4) know what an API key is or how to connect services.

My mum, and probably nearly a billion other users, could imagine step 1 but not connect it to step 2 beyond copy-paste. Most people are still out here sending screenshots of their phones instead of just copying a link or hitting "share" on the image.

reply
lll-o-lll
1 hour ago
[-]
> people have hundreds or thousands of conversations on these apps that can't be easily moved elsewhere.

I just asked it to build me a searchable, indexed, downloaded version of all my conversations. One shot, one HTML page, everything exported (JSON files).

I’m sure I could ask Claude to import it. I don’t see the moat.
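For the curious, a minimal sketch of that kind of export parser. The nesting below ("title" → "mapping" → "message" → "content" → "parts") mirrors the conversations.json layout of recent ChatGPT data exports, but the format is undocumented and may change; the sample data is invented for illustration:

```python
import json

# Toy stand-in for a real conversations.json from a ChatGPT data export.
SAMPLE_EXPORT = json.dumps([
    {
        "title": "Trip planning",
        "mapping": {
            "n1": {"message": {"author": {"role": "user"},
                               "content": {"parts": ["Best hotels in Lisbon?"]}}},
            "n2": {"message": {"author": {"role": "assistant"},
                               "content": {"parts": ["Here are a few options..."]}}},
        },
    },
])

def load_messages(export_json: str):
    """Flatten an export into (conversation_title, role, text) tuples."""
    rows = []
    for convo in json.loads(export_json):
        for node in convo.get("mapping", {}).values():
            msg = node.get("message")
            if not msg:
                continue
            parts = (msg.get("content") or {}).get("parts") or []
            text = " ".join(p for p in parts if isinstance(p, str))
            if text:
                rows.append((convo.get("title", ""),
                             (msg.get("author") or {}).get("role", ""),
                             text))
    return rows

def search(rows, term: str):
    """Naive case-insensitive substring search across all messages."""
    return [r for r in rows if term.lower() in r[2].lower()]

rows = load_messages(SAMPLE_EXPORT)
print(search(rows, "lisbon"))  # one hit, from the "Trip planning" conversation
```

A real page would just wrap something like this in HTML with a search box; the point is that the export is plain JSON and trivially portable.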

reply
ziznznzb
1 hour ago
[-]
How do you know all your conversations are in there?

Honest question; I have this issue a lot with AI claims. Nobody verifies the output.

reply
lll-o-lll
1 hour ago
[-]
I did verify the output. You can download your stuff via their API.
reply
simonw
1 hour ago
[-]
So far I've not seen anyone complain that their conversations have gone missing. There's a GDPR-style export option that I've used a few times for my own.
reply
bdangubic
1 hour ago
[-]
there is no moat also because conversation history is useless. like saying “I cant move to DDG cause Google has my search history”
reply
bryanrasmussen
1 hour ago
[-]
https://myactivity.google.com/myactivity

it's not useless, although it used to be more useful than it is now.

reply
lelanthran
48 minutes ago
[-]
> The near billion users OpenAI has is actually a real moat and might translate into decent chunk of revenue.

> My wife, for example, uses ChatGPT on a daily basis, but has found no reason to try anything else.

Is she paying for it? Because, as we have seen repeatedly in the past, paid products wither and die when Microsoft bundles a default replacement.

You need to provide a really good reason why this time it's different.

reply
junipertea
36 minutes ago
[-]
I believe, specifically for Microsoft, they did bundle a default replacement for ChatGPT in a lot of different places (Bing Chat, Copilot), which use OpenAI models! But the end product is notably worse than the native interface. There is a bare-minimum level of usability required.

For chat apps, good enough is good enough. For something as universally useful and easy to use as ChatGPT, the bar is higher. I don't want to comment on the financial feasibility, but whatever Microsoft put out has been a complete flop even when free, making ChatGPT's $8 subscription seem worth it in comparison.

reply
lelanthran
7 minutes ago
[-]
> But the end product is notably worse than native interface.

That was my point - a lot of superior products were eaten by poor bundled replacements.

Last I checked, Copilot has more users than ChatGPT simply because users are using it from within Excel, Word, Outlook, and Teams, without even knowing that they are using Copilot. It's bundled into Windows.

Right now, Copilot is more useful to users than ChatGPT because it is embedded into their workflows.

reply
akie
3 minutes ago
[-]
ChatGPT and all the competitors have the exact same UI and UX.
reply
nozzlegear
1 hour ago
[-]
Anecdata point: I canceled my ChatGPT pro subscription last year over some shitty thing Altman did at OpenAI and easily moved over to Claude. The only thing I took with me was the system prompt or whatever it's called, I couldn't care less about my conversation history. I'm planning to do the same thing with my Claude subscription if Anthropic kowtows to the Pentagon. These services are not sticky at all IMO.
reply
Scrapemist
9 minutes ago
[-]
Where are you thinking of moving to?
reply
foogazi
1 hour ago
[-]
My wife uses Google AI overview - as an extension of search - on a daily basis and then jumps to Gemini
reply
SecretDreams
1 hour ago
[-]
How much is your wife paying for the privilege to use OAI presently?
reply
gadflyinyoureye
1 hour ago
[-]
This is the real question. Is she willing to pay $20 per month when Google's Gemini is free? Google can remain irrational longer than OAI can remain solvent.
reply
casualscience
1 hour ago
[-]
Google's profits have been going up while 'giving away Gemini for free', so I don't think they're 'being irrational'; their unit economics apparently work.
reply
smugma
1 hour ago
[-]
I understand the underlying quote but not how/why it’s being used here. How is Google giving Gemini away for free to undercut OAI irrational? Anticompetitive, maybe.
reply
fc417fc802
34 minutes ago
[-]
Because the quote uses "irrational"/"solvent", so you have to stick with those words. The parallel is a failed attempt to wait out a disadvantageous price, regardless of the specific reason driving that price.

Even in the context of the original quote, the price is only "irrational" in the eyes of the person trying (and failing) to play the market. "But you can't do that, that doesn't make any sense!", spoken by a person who has failed to fully grasp the situation.

reply
svnt
14 minutes ago
[-]
It is just Google’s business model, and why OpenAI has to do ads better faster.

But you can bet there was more economic foresight going on at Google than OpenAI.

reply
SecretDreams
1 hour ago
[-]
Agree. And we don't even know if they're bleeding out doing it. Google is on more efficient hardware and they fully control their ecosystem. And that ecosystem can feed into and be fed by their other ecosystems. OAI just has LLMs.
reply
morkalork
2 hours ago
[-]
I commute on the train, I see students studying with it. I go for brunch on the weekend, I see parents consulting it while at the table with their infants. I'm at work, colleagues are using it all day. I leave work and I overhear a random woman smoking in the alleyway, talking on her cellphone, saying "so I asked ChatGPT". It's mind-bogglingly pervasive; the last time something had a seismic cultural impact like this was, I dunno, Facebook? And secondly, it's all one specific brand. I'm not encountering Copilot or Gemini in meatspace.
reply
boxedemp
1 hour ago
[-]
My sister uses Gemini and calls it ChatGPT. It's genericide in progress.
reply
GCUMstlyHarmls
28 minutes ago
[-]
My aunt calls it "chat", "I asked chat", which is funny to my online-brain. Like she's a streamer with a permanent audience of 1. Hey chat, is this real?^1

1. https://knowyourmeme.com/memes/chat-is-this-real

reply
simonw
1 hour ago
[-]
I still think it's hilarious that a product name as awful as "ChatGPT" has become so ubiquitous.

I wonder what percentage of its users know what the GPT stands for, or even thought about it for a second?

reply
chii
1 hour ago
[-]
I mean, how is it any worse than 'google'?

ChatGPT is generic (as in, no prior meaning attached, except for the few people in the world who understand what GPT stands for). It's simple: even a non-English speaker can say it easily, and it doesn't require one to be a native speaker to know how to pronounce it (this is a difficult concept for a native English speaker to grok).

These features make for a good name.

reply
simonw
59 minutes ago
[-]
"Google" at least doesn't have an acronym for "Generative pre-trained transformer" baked into it.
reply
goolz
1 hour ago
[-]
How many of those people are paying? I think many say “use ChatGPT” to mean any LLM. As you noted it seems you just see ChatGPT in the wild but that is anecdotal. It is certainly pervasive right now. But I know a lot of people currently switching to Gemini.

I personally prefer Claude models for all my work. If I were them, I would be very worried. They are never giving us AGI and I am skeptical they are worth half a trillion. Their cash burn is insane. Once ads and price hikes come, people will migrate to companies that can still afford to subsidize (like Google).

Plus I heard they lowered projections recently? Sam honestly comes off as a grifter.

reply
hattmall
1 hour ago
[-]
I'm very similar to the OP here: I always hear about ChatGPT, rarely anything else. Most people are definitely not paying, but of the few that are, outside of software developers, they are all paying for ChatGPT exclusively. I don't know of anyone paying for the basic chat versions of other AIs. A few developers pay for Claude and Gemini, but I know hundreds of people who talk of ChatGPT and no other AI; again, most not paying though.
reply
chillfox
1 hour ago
[-]
Outside of work I don't know anyone who pays for AI.

But I have noticed that everyone seems to be using ChatGPT as the generic term for AI. They will google something and then refer to the Gemini summary as "ChatGPT says...". I tried to find out what model/version one of my friends was using when he was talking about ChatGPT and it was "the free one that comes with Android"... So Gemini.

reply
hyperbovine
1 hour ago
[-]
Gemini is nearly unusable thanks to “subsidies”. I honestly don’t see what the path is to these companies making any money short of massive price hikes, or electricity suddenly becoming free.
reply
jen20
1 hour ago
[-]
I actually encountered this today - one of a group I am planning a trip with posted some of the breathless nonsense that ChatGPT produced ("you're not picking a hotel, you're picking a group dynamic..." and other such textual diarrhea).

It turned out the only reason they used ChatGPT was that it is free for small enough volume usage. My suggestion to see what Claude had to say instead was met with "huh, you have to pay for it?". It's not like these are people who can't afford $20 per month for a subscription, but it might be that these assistants aren't even worth that for typical "normie" use cases.

reply
morkalork
1 hour ago
[-]
Is it anecdotal? The observation isn't _my_ experience using it, or of _my friends_. I have no influence over who I see in public using it. I know it's not exactly a scientific study but it's still pretty damn good as a random sample. If I went outside and saw the sky was dark, cloudy and my face got wet, would you tell me it was anecdotal evidence when I say it's raining out?
reply
BreakingProd
13 minutes ago
[-]
Only if you said it was raining everywhere these days.
reply
SecretDreams
1 hour ago
[-]
ChatGPT is like "Jeep". My grandmother calls every SUV a Jeep. But they're not all Jeeps. AI looks like ChatGPT, but people are driving all sorts of different AIs.

I would guess OAI has no moat or stickiness beyond what governments and private companies will do to keep it afloat through equity and circular financing. Good-enough AI is all most people need, and they need it at the cheapest cost basis possible with the most convenient access.

Google will probably win on most of these fronts unless a coalition is formed to actively fight Google at the business/government level. Absent that, it will win out over OAI, and OAI will probably bleed to death trying to become profitable... whenever that happens. You'll likely see their talent and corresponding salaries shrink massively along the way.

reply
esafak
45 minutes ago
[-]
And if you're Boris Johnson, it's pronounced like 'jeep' too!
reply
paxys
1 hour ago
[-]
Yup this is just another case of the HN bubble. I polled a bunch of non technical friends recently who I know use AI on a daily basis. Out of 10+ maybe 2 had ever heard of Claude, and no one had any interest in trying it.

ChatGPT has become the AI verb, and in the consumer space it is not getting dethroned.

reply
siliconc0w
18 minutes ago
[-]
I've churned off OAI in favor of maxing out Claude. Better coding model and less creepy engagement hacking in the chat interface.
reply
svnt
13 minutes ago
[-]
I get plenty of creepy engagement hacking with claude’s chat too now. Before memory it was better.
reply
theptip
2 hours ago
[-]
I think this take underestimates a couple points:

1) the opportunities for vertical integration are huge. Anthropic originally said they didn’t want to build IDEs, then realized the pivot to Claude Code was available to them. Likewise when one of these companies can gobble up Legal, Medical, etc why would they let companies like Harvey capture the margins?

2) OSS models are 6-12 months behind the frontier because of distillation. If labs close their models, the gap will widen. Once vertical integration kicks off, the distillation cost becomes higher and the benefit of opening up generic APIs becomes lower.

I can imagine worlds where things don’t turn out this way, but I think folks are generally underrating the possibilities here.

reply
arctic-true
1 hour ago
[-]
To go vertical they’d need to illustrate the value-add, a problem that the vertical competitors already have. Why use Claude for Accountants at $300/month when regular Claude will do the same thing for much less? The stock answer is that Claude for Accountants keeps your data more secure and doesn’t train on it. But a) I think the enterprise consumer is much less likely to trust a model creator not to stick its hand in the cookie jar than a middleman who needs the trust to survive, and b) the vertical competitors typically don’t use the absolute most up-to-date models in their products anyway, so why not just go open-source and run everything in-house? 6 months is a long time in tech, but it’s the blink of an eye in most white-collar professions.
reply
danny_codes
1 hour ago
[-]
The question is always about performance plateau. If LLM performance plateaus, then OSS models will catch up. If there isn’t a plateau, then I can simply ask the super intelligent AI to distill itself, or tell me how to build a clone.

It's ironic: if the promise of AGI were realized, all knowledge companies, including AI companies, would become worthless.

reply
ryanlitalien
1 hour ago
[-]
I speak native English and barebones high school Spanish. I recently visited Costa Rica and almost every time there was a language barrier issue (unknown word or phrase), the local folks opened ChatGPT, said what they were trying to say in Spanish and then had ChatGPT convert it to English. It was everywhere.
reply
arctic-true
1 hour ago
[-]
When OpenAI starts requiring a payment, or showing an ad before it starts translating, will they continue? Or will they use the Google Translate app, which can do this locally? (Or for that matter Gemini or Grok or whatever?)
reply
paxys
1 hour ago
[-]
“When Netflix starts showing ads who on earth will still use it?”

Everyone, it turns out. Same with Google. Same with YouTube. Same with Instagram, and the rest of the web.

Once people become dependent on ChatGPT (as they already are) watching a 30 second ad in the middle of a session will become second nature.

reply
protocolture
47 minutes ago
[-]
Netflix has a moat in the form of IP licensing restrictions.

Google and YouTube are preinstalled everywhere. Instagram's like 10 minutes old and has a major competitor in TikTok, which they had to have eliminated/captured by the US government.

People wouldnt stay with Netflix if there was a cheap, legal alternative with the same content library.

reply
chii
1 hour ago
[-]
> said what they were trying to say in Spanish and then had ChatGPT convert it to English. It was everywhere.

I'm just so surprised they'd use ChatGPT to do this, when it's just as easy (and perhaps faster) to use Google Translate.

reply
DrammBA
1 hour ago
[-]
It's incredible that Google Translate had this moat for a decade (maybe more), including live translation, but people prefer to use ChatGPT now.
reply
kshacker
1 hour ago
[-]
I have done that at my home. My wife calls maids. They are there. I need to go to restroom. Ask my wife. She is struggling to communicate. It took me 3 seconds to realize ChatGPT could help. And it did.
reply
hattmall
1 hour ago
[-]
Nice that ChatGPT does that; it's also true that Google Translate and other apps have had this functionality for a decade or more. I was getting live German translated on my phone in 2015 with no problems.
reply
freakynit
2 minutes ago
[-]
I have been using Google Lens heavily to scan posters/flyers/information displays in other languages and get them translated to English in like 2-3 seconds. So freakin' helpful.
reply
agentifysh
51 minutes ago
[-]
I think this is the best article on OpenAI that I've ever read. A lot of content these days tries to paint OpenAI in sensational ways that never really get to the bottom of whether OpenAI has an economic moat, and this article does a very thorough job of explaining why OpenAI doesn't have power like the other platforms.

And so this goes back to my theory that OpenAI's strategy is basically to get itself into a position where the market cannot afford to have it implode. Basically, it wants to, or needs to, be too big to fail. And I think we're already seeing the politicization, if you will, of the rocket race between two superpowers (or large powers) on the AI front, and I think that might be a viable strategy.

reply
gradus_ad
2 hours ago
[-]
These very valid points apply to all companies trying to make money off of proprietary models, which means margins are going to collapse in a vicious price war that will make Uber vs Lyft seem tame.

As margins collapse, capex will collapse. Unfortunately, valuations have become so tied to AI hype that any reduction in capex will signal that maybe the hype has gotten ahead of itself, meaning valuations have gotten ahead of themselves. So capex keeps escalating.

None of this takes into account the hoarding effects at play with regards to GPU acquisition. It's really a dangerous situation the industry is caught in.

reply
wombatpm
2 hours ago
[-]
Couple of observations:

Companies used to hoard talent. Now they are hoarding compute, RAM, and GPUs.

DeepSeek showed that there are possibly less expensive ways to train, meaning the future eye-watering expenses may not happen.

Bigger models may not scale. The future may be federations of smaller expert models. ChatGPT X doesn't need to know everything about mental health; it just needs to recognize that the Sigmund von Shrink mental health model should answer some of my questions.
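A federation like that is essentially a router in front of specialist models. As a toy illustration only (the expert names and keyword rules here are invented for the example; a real system would use a learned classifier or a small routing model):

```python
# Trivial keyword router that decides which specialist model should
# answer a prompt, falling back to a small generalist.
ROUTES = {
    "mental_health_expert": {"anxiety", "therapy", "depression"},
    "coding_expert": {"python", "bug", "compile"},
}
GENERALIST = "small_general_model"  # fallback when no specialist matches

def route(prompt: str) -> str:
    words = set(prompt.lower().split())
    for expert, keywords in ROUTES.items():
        if words & keywords:  # any keyword overlap -> dispatch to that expert
            return expert
    return GENERALIST

print(route("I need help with anxiety"))      # mental_health_expert
print(route("What is the capital of Peru?"))  # small_general_model
```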

reply
chipgap98
2 hours ago
[-]
DeepSeek showed that distillation is possible. Their results aren't possible without someone else doing the leading-edge training.
reply
throwaway13337
1 hour ago
[-]
These sorts of doom articles are interesting in that they are from the perspective of tech company valuations. Why is this the important perspective?

For the humanity perspective, this doom is very optimistic. It says that these LLMs currently disrupting the platforms cannot themselves be the next platforms.

Maybe no one will have 'the ability to make people do something that they don't want to do' sort of power with this next stage in computing.

Sounds good to me.

reply
seizethecheese
40 minutes ago
[-]
Great article, but I think this is a stretch: "nor does OpenAI have consumer products on top of the models themselves that have product-market fit."

I would argue ChatGPT is in the top 10 products of all time with regard to product-market fit.

reply
npinsker
9 minutes ago
[-]
Isn't this kind of splitting hairs? Technically you're right, but he's obviously talking about a product that itself, independently from its underlying model, has a "strong, clear competitive lead" over would-be competitors.
reply
dlev_pika
51 minutes ago
[-]
> what a platform really achieves is to harness the creative energy of the entire tech industry, so that you don’t have to invent everything yourself and massively more stuff gets built at massive scale

I hear this, but every time I look the platforms have captured another use case that the startup ecosystem built (eg images, knowledge summarization, coding, music).

The sector is already littered with the corpses of the innovators that got swallowed by the platforms’ aggressiveness to do it all.

reply
rafaelmn
2 hours ago
[-]
I keep hearing about how the app integrations will be where the AI value is and then I see the actual app integrations and they are between useless and mildly helpful.

From what I can see Anthropic's big bet is that they will solve computer use and be able to act as an autonomous agent. Not so sure how fast they will progress on that. OpenAI on the other hand - I have no idea what they are planning - all I'm reading is AI porn and ads.

Google seems lackluster at executing with Gemini, but they are in the best position to win this whole thing: they have so much data (index of the web, YouTube, Maps) and so many ways to capitalize on the models. It's honestly shocking how bad they are at creating/monetizing AI products.

reply
vrosas
1 hour ago
[-]
Google is doing a much better job integrating AI into existing products. Gemini CLI and such seem like just a way to keep the leading competitors humble (a la iOS vs Android). They're also building AI tooling tailored to specific companies (like the Goldman thing just announced) and have the cloud infra to back it up. I really only see Anthropic and Google surviving in 10 years.
reply
edgyquant
2 hours ago
[-]
AI porn and ads may be a bigger market than Anthropic's B2B.
reply
jjmarr
1 hour ago
[-]
OpenRouter top apps are 50/50 between AI girlfriends and coding assistants as a general rule.
reply
sinenomine
2 hours ago
[-]
People underestimate the lead OAI has with their post-5.2 models. The author does not strike me as someone who closely follows the progress frontier labs are making in the US and around the world.
reply
energy123
1 hour ago
[-]
It's a joint ignorance of how these frontier models get baked and what consumers want.

Many pundits think it's just a matter of scraping the internet and having a few ML scientists run ablation experiments to tune hyperparameters. That hasn't been true for over a year. The current requirements are more org-scale, more payoff from scale, more moat. The main legitimate competitive threat is adversarial distillation.

Many pundits also think that consumers don't want to pay a premium for small differences on the margin. That is very wrong-headed. I pay $200/month to a frontier lab because, even though it's only a few % higher in benchmark scores, it is 5x more useful on the margin.

reply
svnt
10 minutes ago
[-]
It is the benchmark error rate, not the benchmark success %, that we actually trip up on.

Going from 85% to 90% is possibly 1/3 fewer errors or even higher, depending on the distribution of work you’re doing.

reply
nick32661123
51 minutes ago
[-]
Do you pay OpenAI, or which one do you use? Do you switch regularly?
reply
hyperbovine
1 hour ago
[-]
Agreed, compare the frontier models from Google and OAI. It’s like night and day. Anyone who says “the tech has caught up” has not spent even one day using Gemini 3.1 to try and accomplish something complicated.
reply
modeless
1 hour ago
[-]
> The models have a very large user base, but very narrow engagement and stickiness, and no network effect or any other winner-takes-all effect so far that provides a clear path to turning that user base into something broader and durable.

I think this is clearly wrong. Users provide lots of data useful for making the models better and that is already being leveraged today. It seems like network effects are likely in the future too. And they have several ways to get stickiness including memory.

reply
consumer451
1 hour ago
[-]
https://paulgraham.com/fundraising.html

I would love to dunk on this or something, but the lesson is that it's all about distribution.

Sama is really good at that, and also.. gotta give props for a lot of forward thinking like the orb, which now makes a lot of sense to me, as non-Apple/Google proof of personhood.

reply
0xbadcafebee
1 hour ago
[-]
OpenAI lost the race for nerds' hearts. In the latest benchmarks, OpenAI is simultaneously cheaper (like 50% less?) and scores higher in coding and tool-use benchmarks (GPT-5.3-Codex trounces Opus 4.6), yet all the coders want to marry Anthropic. I don't think OpenAI understands how to sell, even when they have a product to sell.
reply
agentifysh
47 minutes ago
[-]
I'm not so sure about that. There are a lot of people who were turned off by Anthropic, especially by the weekly usage limits; Codex, in comparison, is on the lax side. And actually, Codex is one of the few products that I think OpenAI has executed really well on. There's just no real equivalent in terms of actual usage that you can get for the same amount of money. Gemini is great, but it seems to still be in a state of flux: too many products stretched too thin. Anthropic is also okay, but it's very limited in the weekly usage you can get out of it.
reply
daxfohl
45 minutes ago
[-]
One trillion capex per year? Does that mean they need everyone on the planet to get $100/yr subscriptions to stay solvent?
reply
ProgrammerMatt
19 minutes ago
[-]
Of course not. It could be half the people on the planet need to pay $200/yr
reply
bdangubic
16 minutes ago
[-]
1/10000th would not pay $20/year
reply
XCSme
1 hour ago
[-]
Same question for Atrophic.

Personally I only see Google (Gemini), X (Grok), and the Chinese models having a chance to still be alive in 1-2 years.

reply
simonw
1 hour ago
[-]
Anthropic are making a very convincing play for business and "enterprise" customers - first with Claude Code and now with Cowork and especially Claude for Excel. The revenue growth they've announced has been extremely impressive over the past year.
reply
XCSme
40 minutes ago
[-]
Well, the public reception seems to be changing, after tweets like this: https://x.com/AnthropicAI/status/2025997928242811253

Also, I liked Anthropic because they were focused a lot on safety, but after the Pentagon stuff, it seems like they dropped their focus on safety.

reply
XCSme
1 hour ago
[-]
Anthropic* lol, unintentional...
reply
anupshinde
1 hour ago
[-]
If you were forced to choose just one of all the competing players, which is "the one" you would use?

For me, the choice is ChatGPT, not for its Codex or other fancy tooling, just the chat. That's not to say Claude Code or Cowork is less important, or that I prefer Codex over Claude Code.

reply
nozzlegear
1 hour ago
[-]
Right now? Claude, so long as they don't fold to the Pentagon's demands. It's important to me that the company at least have a pretense of ethics. If they fold, I may just use open models via DDG – I don't find code assistants very useful for my workflow anyway.
reply
com2kid
2 hours ago
[-]
Sometimes I like to imagine what this would be like if the technology had appeared 25 years ago.

First off, none of this open publishing stuff. Everything would have been trade secrets.

Next, no interoperable JSON APIs; instead, binary APIs that are hard to integrate with and therefore sticky. Once you'd spent 3 or 4 months getting your MCP server set up, no way would you ever try to change to a different vendor!

The number of investors was much smaller so odds are you wouldn't have seen these crazy high salaries and you wouldn't have people running off to different companies left and right. (I know, .com boom, but the .com boom never saw 500k cash salaries...)

Imagine if Google hadn't published any papers about transformers or the attention paper had been an internal memo or heck just word2vec was only an internal library.

It has all been a net good for technological progress but not that good for the companies involved.

reply
deepfriedbits
2 hours ago
[-]
Could they have even trained the models 25 years ago? Wikipedia was nothing close to what it is today, and I know folks here like to mourn the fall of the open web, but it's still orders of magnitude larger today than it was in 2001. YouTube and so many other information stores simply didn't exist then.
reply
hattmall
1 hour ago
[-]
Maybe not 25, but IBM Watson beat humans at Jeopardy over 10 years ago. The technology has been there; the difference is the willingness to burn money on it in hopes of capturing exponential revenue from disrupting industries.

Obviously the costs have come down, but if IBM had felt like burning $100 billion in 2012, I'm pretty sure they could have built a similarly impressive chatbot. Just not sure how they would ever have recouped the revenue.

reply
com2kid
1 hour ago
[-]
The book archives are a big one as well, all the journals that have been published digitally throughout the 2000s, and all the newspapers.

Though with some types of models (specifically voice) it has been discovered that a smaller high quality dataset is better than a giant dataset filled with errors.

reply
neom
3 hours ago
[-]
Not many folks talking about this: https://www.tomshardware.com/tech-industry/artificial-intell...

The WH has said it hasn't approved any sales, but it's not clear China is buying, and it seems they are making good progress on their Huawei Ascend chips. If China is basically at parity on the full stack (silicon, framework, training, model), and it starts open-weighting frontier models at $0.xx/M tokens, then yeah, moat issues all around, one would imagine? Not surprised to see Anthropic complaining like this: https://www.anthropic.com/news/detecting-and-preventing-dist... - but I don't know how you go back from it at this point?

reply
danpalmer
3 hours ago
[-]
Not surprising, Nvidia's margin was just a huge incentive for companies/countries to develop their own solutions. You don't have to be 100% as good if you're 80% cheaper. It's unsurprising that this is being driven by Chinese companies/labs who often have a lot less funding than the US, and the big tech companies (Google, Microsoft, Amazon) who will benefit the most from having their own compute.

I've never believed in Nvidia's moat, and it seems OpenAI's moat (research) has gone and surprisingly is no longer a priority for them.

reply
cosmic_cheese
2 hours ago
[-]
It seems like it’s really only China that’s pursuing the route of doing more with smaller/cheaper models, too, which also has a lot of potential to give the whole bubble a good shake.

To me it seems like the most obvious thing to do. More efficient models both make up for whatever you lost by using cheaper hardware and let you do more with the hardware you have than the competition can. By comparison the ever-growing-model strategy is a dead end.

reply
neom
3 hours ago
[-]
Feels a bit crazy saying this but I can imagine a weird future where we have some outlawed Chinese tokens situation under some national security guise. No clue how that would work but nothing surprises me anymore.
reply
samrus
2 hours ago
[-]
Shit is about to get a lot more cyberpunk than we're used to, that's for sure
reply
nsoonhui
3 hours ago
[-]

  it seem they are making good progress on their huawei ascend chips
This is interesting to me. I thought the reason for the DeepSeek delay was the insistence (by the politicians) on using Huawei chips [0]. But that was August of last year.

Anything changes in between?

[0]: https://www.reuters.com/world/china/deepseeks-launch-new-ai-...

reply
neom
3 hours ago
[-]
GLM-5 was trained entirely^ on Huawei Ascend chips. https://z.ai/blog/glm-5 / https://tech.yahoo.com/ai/articles/chinas-ai-startup-zhipu-r...

(^edit: I don't know for certain that "entirely" is accurate - edit again: found a Chinese source saying their image model is end-to-end Ascend, or at least domestic: https://zhuanlan.zhihu.com/p/1994775762516080044 & https://www.guancha.cn/economy/2026_02_12_806895.shtml)

reply
SXX
2 hours ago
[-]
And even this information might not be very reliable, because neither the US nor the Chinese government would be happy about the fact that some models might happen to have been trained in some "shadow datacenter" full of Nvidia GPUs.
reply
re-thc
2 hours ago
[-]
China doesn't need to buy it. They can continue their policy and look good.

They've already found a better route. Buy it elsewhere e.g. in Singapore. Train their models there using Nvidia hardware.

Ship the result and fine tune back in China.

So "China" is and has always been buying it. No difference. The politics can keep raging.

reply
Buttons840
2 hours ago
[-]
Tech companies are one of the jewels in America's (USA's) crown. If we build a bunch of huge AI companies, rivals will probably continue to release open AI models which undermine the US's influence in the world.
reply
johnfn
3 hours ago
[-]
This article is significantly better written than most anti-OpenAI/AI articles, and for that I am really grateful. I am generally an AI booster (lol), so I am happy to read well-considered thought pieces from people who disagree with me.

That being said...

> The one place where OpenAI does have a clear lead today is in the user base: it has 8-900m users. The trouble is, these are only 'weekly active' users: the vast majority even of people who already know what this is and know how to use it have not made it a daily habit. Only 5% of ChatGPT users are paying, and even US teens are much more likely to use this a few times a week or less than they are to use it multiple times a day.

This really props up the whole argument, because the author goes on to say that OpenAI's users are not really engaged. But is "only" 5% of users paying of an 8-900M user base really so inconsequential? What percentage of Meta's users are paying? Google's? I would be curious to see the author dig deeper here, because I am skeptical that this is really as bad as the author suggests.

Moving on to another section:

> If the next step is those new experiences, who does that, and why would it be OpenAI? The entire tech industry is trying to invent the second step of generative AI experiences - how can you plan for it to be you? How do you compete with this chart - with every entrepreneur in Silicon Valley?

Er, are any of these startups training foundation models? No? Then maybe that is how you compete? I suppose the author would say that the foundation model isn't doing much for OpenAI's engagement metrics (and therefore revenue), but I am not sure I agree there.

Still, really good article. I think it really crystallizes the anti-OpenAI argument and it gives me a lot of interesting things to think about.

reply
dijksterhuis
2 hours ago
[-]
> What percentage of Meta's users are paying? Google's?

The advertiser-based business model of those companies makes your question/thought process here problematic for me. Historically speaking, Google and "Meta" (Facebook) were primarily advertising companies. They provided billboards (space and time on the web page in front of an end-user) to people who were willing to buy that space and time. The "free access" end-users would always end up seeing said billboards, which is how they ended up "paying" for the service.

So most Meta/Google end-users were "paying" users. They were being subsidised by the advertising customers paying for the end-users (who were forced to view adverts). The end-users paid with interruption of the service by an advert. [0]

In that context it feels a little like you're comparing apples to dave's left foot, as OpenAI hasn't had that with advertising ............ historically [1].

--

[0]: yes ad-blockers, yes more diverse revenue income streams over the years like with phones, yes this is simplified yadayada

[1]: excluding government etc. ~bailouts~ investments as not the same as advertising subsidies, but you could argue it's doing the same thing

reply
johnfn
2 hours ago
[-]
Yes -- but neither Google nor Meta started off as an advertising company - they started off providing a service a lot of people liked, and then eventually added ads to it. My assumption (somewhat implicit, admittedly) is that there's no reason OpenAI couldn't do the same. I can understand why that might be controversial, though.

But honestly, if OpenAI can't figure out ads given all their data and ability, they deserve to fail. :P

reply
chipgap98
2 hours ago
[-]
But OpenAI has more serious competition than those others did when they were coming up. That puts pressure on them to figure out ads, and they dragged their feet getting started.
reply
benedictevans
52 minutes ago
[-]
You’ve missed the point completely - if the important experiences are things built on top of foundation models, where the model itself is just an API call, then you don’t need to have a foundation model to build them, and the model is just commodity infra.
reply
johnfn
14 minutes ago
[-]
Yes, but OpenAI has 900M+ user reach, plus staggering amounts of cash, plus early access + deep integration with the latest and greatest models. I hardly think that is tantamount to "just an API call".
reply
wesammikhail
2 hours ago
[-]
> But is "only" 5% of users paying of a 8-900M user base really so inconsequential? What percentage of Meta's users are paying? Google's? I would be curious to see the author dig deeper here, because I am skeptical that this is really as bad as the author suggests.

The difference is in the unit economics. OpenAI has to spend massively per free user it serves. The others you mentioned have SaaS economics where the marginal cost of onboarding and serving each non-paying user is essentially zero while also gaining money from these free users via advertising. Hence, the free users are actually a net positive rather than an endless money sink.

Keep also in mind that AI has always been, and will always be, a commodity. The moment you start forcing people to convert into paying customers is the moment they jump ship at scale.

Just something to keep in mind.

reply
system2
2 hours ago
[-]
This is confirmation bias. HN and other tech people are focusing on the programming aspect of AI more than anything else. The average user does not use it for that, and they don't care. ChatGPT became something like Kleenex.
reply
kdheiwns
1 hour ago
[-]
Kleenex was exactly what I had in mind when reading other comments. And just like Kleenex, where people use whatever tissue they find and forget the word "tissue" even exists, ChatGPT seems to be becoming a genericized term that just means "AI chatbot."
reply
d--b
2 hours ago
[-]
Worth noting that it’s not a winner-takes-all situation. There’s definitely space for differentiation.

Anthropic is in favor with developers and generally tech people, while OpenAi / Gemini are more commonly used by regular folks. And Grok, well, you know…

We have yet to see who’s winning in the “creative space”, probably OpenAI.

As these positionings crystallize, each company is likely going to double down on its users’ communities, like Apple did when specifically targeting creative/artsy people, instead of cranking out general models that aren’t significantly better at anything.

reply
system2
2 hours ago
[-]
I categorize it like this:

Claude: Programmers

ChatGPT: LGBTQ/Liberals, with a lot of censorship

Grok: Joe Rogan

reply
ftchd
1 hour ago
[-]
DeepSeek: Jìan-Yáng
reply
re-thc
2 hours ago
[-]
Wasn't OpenAI's moat buying up all the RAM or Nvidia cards?
reply
mrcwinn
26 minutes ago
[-]
To say "except for distribution" OpenAI has few advantages is like saying "except for location" this retail store really doesn't stand a chance.
reply
boxingdog
2 hours ago
[-]
The main problem with OpenAI/Anthropic is that their only moat is their models, and it has been proven that you can clone a model through distillation. Although the performance is not exactly the same, it gets very close to the original.
reply
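The distillation idea the comment refers to can be sketched in a few lines: query a "teacher" model for temperature-softened output distributions on unlabeled inputs, then train a smaller "student" to match them. A toy numpy sketch (linear models stand in for real networks; nothing here reflects any lab's actual setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, T=1.0):
    """Temperature-softened softmax along the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# "Teacher": a fixed random linear classifier we can only query, like an API.
d_in, n_classes = 8, 4
W_teacher = rng.normal(size=(d_in, n_classes))

# Unlabeled "prompts" plus the teacher's soft targets at temperature T.
X = rng.normal(size=(512, d_in))
T = 2.0
p_teacher = softmax(X @ W_teacher, T)

# Student starts from scratch and only ever sees teacher outputs.
W_student = np.zeros((d_in, n_classes))
lr = 0.5
for _ in range(500):
    p_student = softmax(X @ W_student, T)
    # Gradient of cross-entropy between teacher and student distributions
    grad = X.T @ (p_student - p_teacher) / len(X)
    W_student -= lr * grad

# Fraction of inputs where the student's hard prediction matches the teacher's
agree = (softmax(X @ W_teacher).argmax(1) == softmax(X @ W_student).argmax(1)).mean()
print(float(agree))  # expect high agreement, without ever seeing W_teacher
```

This is of course the easy convex case; the point it illustrates is only the one from the comment: matching outputs on enough inputs gets you close to the original model's behavior, even without access to its weights.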