Google, Nvidia, and OpenAI (stratechery.com)
131 points | 12 hours ago | 13 comments
RoddaWallPro
10 hours ago
[-]
"advertising would make ChatGPT a better product."

And with that, I will never read anything this guy writes again :)

reply
biophysboy
10 hours ago
[-]
I like and read Ben's stuff regularly; he often frames "better" from the business side. He will use terms like "revealed preference" to claim users actually prefer bad product designs (e.g. most users use free ad-based platforms), but a lot of human behavior is impulsive, habitual, constrained, and irrational.
reply
RoddaWallPro
7 hours ago
[-]
I agree that is what he is doing, but I could also justify adding fentanyl to every drug sold in the world as "making it better" from a business perspective, because it is addictive. Anyone who ignores the moral or ethical angle on decisions, I cannot take seriously. It's like saying that Maximizing Shareholder Value is always the right thing to do. No, it isn't. So don't say stupid shit like that; be a human being, use your brain and your capacity to look at things, and ask "is this good for human society?".
reply
chii
51 minutes ago
[-]
> It's like saying that Maximizing Shareholder Value is always the right thing to do. No, it isn't.

It is, for the agents of the shareholders, as long as the actions of those agents are legal, of course. That's why it's not legal to put fentanyl into every drug sold: fentanyl is illegal.

But it is legal to put (more) sugar and/or salt into processed foods.

reply
dozerly
33 minutes ago
[-]
No, it’s not. The government, and laws by proxy, will never keep up with people’s willingness to “maximize shareholder value” and so you get harmful, future-illegal practices. Reagan was “maximizing shareholder value”, and now look where the US is.
reply
chii
13 minutes ago
[-]
You have to show this 'future-illegal' action is harmful first, by actually demonstrating the harm.

That's why I used the sugar example: it's now demonstrably harmful in the large quantities being used.

I am against preventative "harmful" laws, when harm hasn't been demonstrated, as it restricts freedom, adds red tape to innovation, and stifles startups from exploring the space of possibilities.

reply
biophysboy
7 hours ago
[-]
I agree - I think Ben tends to get business myopia. I read him with that in mind.
reply
Cheer2171
10 hours ago
[-]
To an MBA type, addictive drugs are the best products. They reveal people's latent preferences for being desperately poor and dependent. They see a grandma pouring her life savings into a gambling app and think "How can I get in on this?"
reply
biophysboy
10 hours ago
[-]
I think it's more subtle; they fight for regulations they deem reasonable and against those they deem unreasonable. Anything that curtails growth of the business is unreasonable.
reply
wubrr
1 hour ago
[-]
Which is entirely unreasonable, and there's no need to make excuses or explain away this borderline psychopathy.
reply
bloppe
9 hours ago
[-]
To be fair, businesses should assume that customers actually "want" what they create demand for. In the case of misleading or dangerously addictive products, regulation should fall to government, because that's the only actor that can prevent a race to the bottom.
reply
gmd63
9 hours ago
[-]
The folks who succeed most in business are the type who have an intuition for what's best. They're not automatons reading too far into and amplifying the imperfect and shallow signals of "demand" in a marketplace.
reply
baobabKoodaa
8 hours ago
[-]
Because all people everywhere are psychopaths who will stab you for $5 if they can get away with it? If you take that attitude, why even go to "work" or run a "business"? It'd be so much more efficient to just stab-stab-stab and take the money directly.
reply
chii
48 minutes ago
[-]
> It'd be so much more efficient to just stab-stab-stab and take the money directly.

which is exactly what the law of the jungle is. And guess who sits at the top within that regime?

Humans would devolve back into that, if not for the state's enforcement, backed by violence. Therefore, it is the responsibility of the state to make sure regulations are sound enough to prevent the stab-stab-stab, not the responsibility of the individual to refrain from taking an advantage that was there for the taking.

reply
bloppe
1 hour ago
[-]
I'll indulge your straw man because it's actually pretty good at illustrating my point. 99.9% of people are not psychopaths. But you only need .1% of people to be psychopaths. In a world where you get $5 and no threat of prosecution for stabbing people, you can bet that there will be extremely efficient and effective stabbing companies run by those psychopaths. Even normal people who don't like stabbing others would see the psychopaths getting rich and think to themselves "well, everyone's getting stabbed anyway, I might as well make some money too". That's what a race to the bottom is.

And that's why the government regulates stabbing.

reply
runarberg
1 hour ago
[-]
In behavioral science (of which economics should be a sub-field), this is called perverse incentives. A core feature of capitalism is that if you don't abandon your morals and maximize your profits at somebody else's expense, you will soon be out-competed by those who will.
reply
lmm
2 hours ago
[-]
> Because all people everywhere are psychopaths who will stab you for $5 if they can get away with it?

Not all people everywhere, but most successful businesspeople.

> It'd be so much more efficient to just stab-stab-stab and take the money directly.

It isn't though? If you do that then you get locked up and lose the money, so the smart psychopaths go into business instead.

reply
mistrial9
9 hours ago
[-]
To be fair, organized predatory behavior is to be expected?

Joke: the World Council of Animals meeting wraps up its morning session with "OK great, now who is for lunch?"

reply
an0malous
10 hours ago
[-]
If you liked that, you'll enjoy his take on how, actually, bubbles are good: https://stratechery.com/2025/the-benefits-of-bubbles/
reply
matwood
9 hours ago
[-]
And he's right (as are the sources he points to) that some bubbles are good. They end up being a way to pull in a large amount of capital to build out something completely new, even while it's still unclear where the future will lead.

A speculative example: AI ends up failing and crashing out, but not until we've built out huge DCs and power generation that get used for the next valuable idea, one that wouldn't be possible w/o the DCs and power generation already existing.

reply
foogazi
9 hours ago
[-]
The bubble argument was hard to wrap my head around.

It sounded vaguely like the broken window fallacy: a broken window creating "work".

Is the value of bubbles in trying out new products/ideas and pulling funds from unsuspecting bag holders?

Otherwise it sounds like a huge destruction of stakeholder value, but that seems to be how venture funding works.

reply
tim333
5 hours ago
[-]
The usual argument is the investment creates value beyond that captured by the investors so society is better off. Like investors spend $10 bn building the internet and only get $9 bn back but things like Wikipedia have a value to society >$1 bn.
reply
20after4
9 hours ago
[-]
Huge DCs and power generation might be useful, long-lasting infrastructure; the racks full of GPUs and TPUs, however, will depreciate rather quickly.
reply
sdenton4
9 hours ago
[-]
I think this is a bit overblown.

In the event of a crash, the current generation of cards will still be just fine for a wide variety of AI/ML tasks. The main problem is that we'll have more than we know what to do with if someone has to sell off their million-card mega-cluster...

reply
raw_anon_1111
9 hours ago
[-]
The problem is that the failure rate of GPUs is extremely high.
reply
RoddaWallPro
7 hours ago
[-]
I _kind of_ understand this one. You can think of a bubble as a market exploring a bunch of different possibilities, a lot of which may not work out. But the ones that do work out, they may go on to be foundational. Sort of like startups: you bet that most of them will fail, but that's okay, you're making bets!

The difference, of course, is that when a startup goes out of business, it's fine (from my perspective) because it was probably all VC money anyway and so it doesn't cause much damage, whereas an economy-wide bubble popping causes a lot of damage.

I don't know that he's arguing that they are good, but rather that _some_ kinds of bubbles can have a lot of positive effects.

Maybe he's doing the same thing here, I don't know. I see the words "advertising would make X Product better" and I stop reading. Perhaps I am blindly following my own ideology here :shrug:.

reply
Groxx
10 hours ago
[-]
Yeah... and it's (partly) based on the claim that it has network effects like Facebook's? I don't see that at all. There's basically no social or cross-account stuff in any of them, and if anything LLMs are the best non-lock-in system we've ever had: none of them are totally stable or reliable, and they all work by simply telling them to do the thing you want. Your prompts today will need tweaking tomorrow, regardless of whether it's ChatGPT or Gemini, especially for individuals who are using the websites (which also keep changing).

Sure, there are APIs, and switching those takes effort... but many of them are nearly identical, and the ecosystem effect of ~all tools supporting multiple models seems far stronger than the network effect of your parents using ChatGPT specifically.

reply
stanfordkid
9 hours ago
[-]
I’d argue that AI APIs are nearly trivial to switch… the prompts can largely stay the same, and function calling is pretty similar.
reply
claw-el
10 hours ago
[-]
Ben Thompson is a content creator. Even if Ben’s content does not directly benefit from ads, it is the fact that other creators’ content carries ads that makes Ben’s content premium in comparison.

I would say that, on this topic (ads on internet content), Ben Thompson may not have as objective a perspective as he has on other topics.

reply
raw_anon_1111
8 hours ago
[-]
People aren’t collectively paying him between $3 million and $5 million a year (an estimated 40k+ subscribers paying a minimum of $120 a year) just because he doesn’t have ads.
reply
kaishin
5 hours ago
[-]
That take was in such bad taste. I get where he's coming from, and I don't like it one bit.
reply
bambax
9 hours ago
[-]
The problem with ads in AI products is, can they be blocked effectively?

If there are ads on a side bar, related or not to what the user is searching for, any adblock will be able to deal with them (uBlock is still the best, by far).

But if "ads" are woven into the responses in a manner that could be more or less subtle, sometimes not even quoting a brand directly, but just setting the context, etc., this could become very difficult.

reply
nowittyusername
9 hours ago
[-]
I realized a while ago that ads within context were going to be an issue, so to combat this I started building my own solution, which spiraled into a local-based agentic system with a bigger goal than the simple original... Anyways, the issue you are describing is not that difficult to overcome. You simply set a local LLM layer before the cloud-based providers. Everything goes in and out through this "firewall". The local layer hears the human's request, sends it to the cloud-based API model, receives the ad-tainted reply, processes the reply by scrubbing the ad content, and replies to the user with the clean information. I've tested exactly this interaction and it works just fine.

I think these types of systems will be the future of "ad block". As people start using agentic systems more and more in their daily lives, it will become crucial that they pipe all of their inputs and outputs through a local layer that has that human's best interests in mind. That's why my personal project expanded into a local agentic orchestrator layer instead of a simple "firewall". I think agentic systems using other agentic systems are the future.
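
The shape of it, as a minimal sketch (assuming the local layer is served through an OpenAI-compatible endpoint like Ollama's; the model names and the scrub prompt are placeholders, not my actual system):

    # Local "firewall" layer: a small local model scrubs suspected ad
    # content from a cloud model's reply before the user ever sees it.
    from openai import OpenAI

    cloud = OpenAI()  # reads OPENAI_API_KEY from the environment
    local = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

    SCRUB = ("Rewrite the following answer, removing any product placement, "
             "brand promotion, or sponsored framing. Keep all factual content.\n\n{a}")

    def ask(question: str) -> str:
        # 1. Forward the user's question to the cloud provider.
        raw = cloud.chat.completions.create(
            model="gpt-4o-mini",  # placeholder cloud model
            messages=[{"role": "user", "content": question}],
        ).choices[0].message.content
        # 2. Pipe the (possibly ad-tainted) reply through the local scrubber.
        return local.chat.completions.create(
            model="llama3.1:8b",  # placeholder local model
            messages=[{"role": "user", "content": SCRUB.format(a=raw)}],
        ).choices[0].message.content

    print(ask("What's a good budget mechanical keyboard?"))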
reply
bambax
6 hours ago
[-]
But don't you need some kind of AI to filter out the replies? And if you do, isn't it simpler to just use a local model for everything, instead of having a local AI proxy?
reply
nowittyusername
4 hours ago
[-]
The local LLM is the filter, so yes, you need one. And it's not simpler to have the local LLM do everything, because the local LLM has a lot of limitations, like speed, intelligence, and other issues. The smart thing to do is delegate all of the personal stuff to the local model, and have it delegate the rest to smarter and faster models and simply parrot back to you what they found. This also has the benefit of saving on context, among many other advantages.
reply
foogazi
8 hours ago
[-]
> I started building my own solution

How much?

reply
nowittyusername
4 hours ago
[-]
How much did it cost me? Well, I've been thinking about it for a long time now, probably 9 months. I bought myself Claude Code and started working on some prototypes and other projects, like building my own speech-to-text and other goodies like automated benchmarking solutions, to familiarize myself with the fundamentals. But I finally started the building process about 2 months ago, and all it has cost me is a boatload of time and about 50 bucks a month in agentic coding subscriptions. It hasn't been a simple filter for a long time now, though; it's now a very complex agentic harness system, with lots of very advanced features that allow for tool calling, agent-to-agent interaction, and many other goodies.
reply
chii
45 minutes ago
[-]
> But if "ads" are woven into the responses in a manner that could be more or less subtle

Do you realize how much product placement there has been in movies since... well, the existence of movies?

reply
alecco
10 hours ago
[-]
Indeed. Why do people follow these clowns? They seem to read high-level takes and spew out their nonsense theories.

They fail to mention Google's edge: Inter-Chip Interconnect and the allegedly 1/3 price. Then they talk about a software moat, and it sounds like they've never even compiled a hello world on either architecture. smh

And this comes out days after many in-depth posts like:

https://newsletter.semianalysis.com/p/tpuv7-google-takes-a-s...

A crude Google search AI summary of those would be better than this dumb blogpost.

reply
ebiester
10 hours ago
[-]
Why? It turns out that I try to read people who have a different perspective than I do. Why would I read only things that confirm my current biases?

(Unless those writings are looking to dehumanize or strip people of rights or inflame hate - I'm not talking about propaganda or hate speech here.)

reply
Teever
9 hours ago
[-]
Personally when I go to the grocery store I pick fruits and vegetables that are ripe or are soon to be ripe, and I stay away from meat that is close to expiration or has an off putting appearance or odour to it.

With that said there's no accounting for taste.

reply
raw_anon_1111
8 hours ago
[-]
You realize this “dumb blogpost” is written by the most successful writer in the industry as far as revenue from a paid newsletter goes? He has had every major tech CEO on his podcast, and he is credited as the inspiration for Substack.

The Substack founders unofficially marketed it early on as “Stratechery for independent authors”.

Your analysis concerning the technology instead of focusing on the business is about like Rob Malda not understanding the success of the iPod: “No wireless. Less space than a Nomad. Lame.”

Even if you just read this article, he never argued that Google didn’t have the best technology, he was saying just the opposite. Nvidia is in good shape precisely because everyone who is not Google is now going to have to spend more on Nvidia to keep up.

He has said that AI may turn out to be a “sustaining innovation” (a term coined by Clayton Christensen) and that the big winners may be Google, Meta, Microsoft, and Amazon, because they can leverage their pre-existing businesses and infrastructure.

Even Apple might be better off since they are reportedly going to just throw a billion at Google for its model.

reply
lmm
2 hours ago
[-]
> You realize this “dumb blogpost” is written by the most successful writer in the industry as far as revenue from a paid newsletter goes?

The belief that adding ads makes things better would be an extremely convenient belief for a writer to have, and I can easily see how that could result in them getting more revenue than other writers. That doesn't make it any less dumb.

reply
raw_anon_1111
2 hours ago
[-]
With at least $5 million in paid subscriptions annually, living between Wisconsin and Taiwan as an independent writer, do you really think he needs to juice his subscriptions by advocating that other people run ads on an LLM?

Any use of LLMs by other people reduces his value.

reply
specialist
1 hour ago
[-]
A lucky few can make good money telling rich people what they want to hear.

eg Yuval Noah Harari, Bari Weiss, Matthew Yglesias

reply
specialist
1 hour ago
[-]
> spew out their nonsense theories

Discussing "innovator's dilemma" unironically is a fullstop for me.

reply
oblio
5 minutes ago
[-]
Why? The book describes a common real-life business situation and explains it really well.
reply
javcasas
8 hours ago
[-]
"advertising in ChatGPT would make DeepSeek/Qwen/<other AI> a better product"

There, fixed.

reply
spyckie2
10 hours ago
[-]
A better product for making money, of course.
reply
cowpig
9 hours ago
[-]
Ben Thompson is a sharp guy who can't see the forest for the trees. Nor most of the trees. He can only see the three biggest trees that are fighting over the same bit of sunlight.
reply
empath75
10 hours ago
[-]
I am not 100% sure this is wrong?

I frequently ask ChatGPT to research products or look at reviews, etc., and it is pretty obvious that I want to buy something, yet the bridge from 'researching products' to 'buying stuff' is basically non-existent on ChatGPT right now. ChatGPT having some affiliate relationships with merchants might actually be quite useful for a lot of people and would probably generate a ton of revenue.

reply
jeromegv
8 hours ago
[-]
It's likely they already make money on affiliates, but this is different: ads are product placement.
reply
matwood
10 hours ago
[-]
ChatGPT has recently been linking me directly to Amazon or other stores to buy what I'm researching.
reply
yunohn
10 hours ago
[-]
Sure, but affiliate != ads. Rather, both affiliate links and paid ad slots are by definition not neutral and thus bias your results, no matter what anyone claims.
reply
sho_hn
10 hours ago
[-]
"Better product" here means "monetizes harder". You just have a different concept of product quality than hardline-capitalist finance bros.
reply
dandanua
9 hours ago
[-]
better product = inflicting more suffering while generating more revenue
reply
tyre
8 hours ago
[-]
> OpenAI’s refusal to launch and iterate an ads product for ChatGPT — now three years old — is a dereliction of business duty, particularly as the company signs deals for over a trillion dollars of compute.

I think this is intentional by Altman. He’s a salesman, after all. When there is infinite possibility, he can sell any vision of future revenue and margins. When there are no concrete numbers, it’s your word against his.

Once they try to monetize, however, he’s boxed in. And the problem with OpenAI vs. Google in the earlier days is that he needs money and chips now. He needs hundreds of billions of dollars. Trillions of dollars.

Ad revenue numbers get in the way. An ads product will take time to optimize; you’ll get public pushback and bad press (despite what Ben writes, ads will definitely not be a better product experience).

It might be the case that real revenue is worse than hypothetical revenue.

reply
antiloper
8 hours ago
[-]
Absolute silicon valley logic: https://www.youtube.com/watch?v=BzAdXyPYKQo
reply
dehrmann
1 hour ago
[-]
Pre-revenue gets weird when you're in the cuatro commas club.
reply
gundmc
7 hours ago
[-]
"If you show revenue, people will ask 'HOW MUCH?' and it will never be enough. The company that was the 100xer, the 1000xer is suddenly the 2x dog. But if you have NO revenue, you can say you're pre-revenue! You're a potential pure play... It's not about how much you earn, it's about how much you're worth. And who is worth the most? Companies that lose money!"
reply
tyre
5 hours ago
[-]
I’m not advocating for it! But it’s real.
reply
raw_anon_1111
10 hours ago
[-]
I do all of my “AI” development on top of AWS Bedrock, which hosts every available model except for OpenAI’s closed-source models, which are exclusive to Microsoft.

It’s extremely easy to write a library that makes switching between models trivial. I could add OpenAI support. It would be just slightly more complicated because I would have to have a separate set of API keys while now I can just use my AWS credentials.

Also, latency would of course be theoretically worse, since by hosting on AWS and using AWS for inference you stay within the internal network (yes, I know to use VPC endpoints).

There is no moat around switching models, whatever Ben says.
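
To illustrate, a minimal sketch of such a library using Bedrock's Converse API, which normalizes the request shape across model families (the model IDs are just examples; use whatever is enabled in your account):

    import boto3

    client = boto3.client("bedrock-runtime", region_name="us-east-1")

    def ask(model_id: str, prompt: str) -> str:
        # Same call shape regardless of model family (Anthropic, Nova, ...).
        response = client.converse(
            modelId=model_id,
            messages=[{"role": "user", "content": [{"text": prompt}]}],
            inferenceConfig={"maxTokens": 512, "temperature": 0.2},
        )
        return response["output"]["message"]["content"][0]["text"]

    # Switching model families is a one-argument change:
    print(ask("amazon.nova-lite-v1:0", "Categorize this ticket: ..."))
    print(ask("anthropic.claude-3-5-sonnet-20240620-v1:0", "Categorize this ticket: ..."))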

reply
bambax
9 hours ago
[-]
openrouter.ai does exactly that, and it lets you use models from OpenAI as well. I switch models often using openrouter.
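
For the curious, the switch really is just a base URL and a model string; a minimal sketch against OpenRouter's OpenAI-compatible endpoint (the model slugs are examples):

    from openai import OpenAI

    client = OpenAI(base_url="https://openrouter.ai/api/v1",
                    api_key="sk-or-...")  # your OpenRouter key

    for model in ["openai/gpt-4o-mini", "google/gemini-2.0-flash-001"]:
        reply = client.chat.completions.create(
            model=model,  # swapping providers is just a different string
            messages=[{"role": "user", "content": "One-line summary of TPUs?"}],
        )
        print(model, "->", reply.choices[0].message.content)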

But talk to any (or almost any) non-developer and you'll find they 1/ mostly only use ChatGPT, sometimes only know of ChatGPT and have never heard of any other solution, and 2/ in the rare case they did switch to something else, they don't want to go back; they're gone for good.

Each provider has a moat that is its number of daily users; and although it's a little annoying to admit, OpenAI has the biggest moat of them all.

reply
redwood
2 hours ago
[-]
All Google needs to do is bite the bullet on the cost, flip core search to AI, and immediately dominate the user count. They can start by focusing on the questions that get asked in Google search. Boom.
reply
raw_anon_1111
1 hour ago
[-]
Core search has been using “AI” since they basically deprioritized PageRank.

I think the combination of AI overviews and a separate “AI mode” tab is good enough.

reply
raw_anon_1111
9 hours ago
[-]
Non-developers using chatbots and being willing to pay is never going to be as big as the enterprise market or Big Tech using AI in the background.

I would think that Gemini (the model) will add profit to Google well before OpenAI ever becomes profitable, as Google leverages it within its business.

Why would I pay for openrouter.ai and add another dependency? If I’m just using Amazon Bedrock hosted models, I can just use the AWS SDK and change the request format slightly based on the model family and abstract that into my library.

reply
bambax
6 hours ago
[-]
You don't need openrouter if you already have everything set up in your own AWS environment. But if you don't, openrouter is extremely straightforward, just open an account and you're done.
reply
EmiDub
4 hours ago
[-]
How is the number of users a moat when you are losing money on every user?
reply
raw_anon_1111
3 hours ago
[-]
A moat involves switching costs for users. It’s not related to profitability.
reply
spruce_tips
8 hours ago
[-]
I agree there is no moat in the mechanics of switching models, i.e., what openrouter does. But it's not as straightforward as everyone says to switch out the model powering a workflow that's been tuned around said model, whether that tuning was purposeful or accidental. It takes time to re-evaluate that the new model works the same as or better than the old model.

That said, I don't believe OpenAI's models consistently produce the best results.

reply
raw_anon_1111
8 hours ago
[-]
You need a way to test model changes regardless, as models within the same family change too. Is it really a heavier lift to test different model families than it is to test going from GPT-3.5 to GPT-5, or even to test your prompt modifications?
reply
spruce_tips
6 hours ago
[-]
No, I don't think it's a heavier lift to test different model families. My point was that swapping models, whether to different model families or to new versions in the same family, isn't straightforward. I'm reluctant both to upgrade model versions AND to swap model families, and that in itself is a type of stickiness that multiple model providers have.

Maybe another way of saying the same thing is that there is still a lot of work to do to make eval tooling a lot better!

reply
DenisM
53 minutes ago
[-]
Continuous eval is unavoidable even absent model changes. Agents are keeping memories, tools evolve over time, external data changes, new exploits are being deployed, partner agents get upgraded.

There's too much entropy in the system. Context babysitting is our future.

reply
biophysboy
10 hours ago
[-]
Have you noticed any significant AND consistent differences between them when you switch? I frequently get a better answer from one vs the other, but it feels unpredictable. Your setup seems like a better test of this
reply
raw_anon_1111
10 hours ago
[-]
For the most part, I don’t do chatbots, except for a couple of RAG-based chatbots. It’s more behind-the-scenes stuff like image understanding, categorization, nuanced sentiment analysis, semantic alignment, etc.

I’ve created a framework that lets me test quality in an automated way across prompt changes and models, and I compare cost/speed/quality.

The only thing among all those that requires humans to judge the quality is RAG results.
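
In skeleton form it's roughly this (a minimal sketch with hypothetical test data and example model IDs; the real framework scores more dimensions than accuracy and latency):

    import time
    import boto3

    client = boto3.client("bedrock-runtime", region_name="us-east-1")

    TEST_SET = [  # (input, expected category) - hypothetical examples
        ("My card was charged twice", "billing"),
        ("The app crashes on login", "technical"),
    ]

    for model_id in ["amazon.nova-lite-v1:0",
                     "anthropic.claude-3-haiku-20240307-v1:0"]:
        correct, start = 0, time.perf_counter()
        for text, expected in TEST_SET:
            prompt = f"Answer with one word, billing or technical: {text}"
            reply = client.converse(
                modelId=model_id,
                messages=[{"role": "user", "content": [{"text": prompt}]}],
            )["output"]["message"]["content"][0]["text"]
            correct += expected in reply.lower()
        elapsed = time.perf_counter() - start
        print(f"{model_id}: {correct}/{len(TEST_SET)} correct, {elapsed:.1f}s")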

reply
biophysboy
10 hours ago
[-]
So who is the winner using the framework you created?
reply
raw_anon_1111
10 hours ago
[-]
It depends. Amazon’s Nova Lite gave me the best speed-versus-performance tradeoff when I needed really quick real-time inference for categorizing a user’s input (think call centers).

One of Anthropic’s models did the best with image understanding, with Amazon’s Nova Pro slightly behind.

For my tests, I used a customer’s specific set of test data.

For RAG I forget, but RAG is much more subjective. I just gave the customer the ability to configure the model and modify the prompt so they could choose.

reply
biophysboy
10 hours ago
[-]
Your experience matches mine then... I haven't noticed any clear, consistent differences. I'm always looking for second opinions on this (bc I've gotten fairly cynical). Appreciate it
reply
kevstev
8 hours ago
[-]
Check out https://poe.com - it does the same thing. I agree with your assessment, though: while you can get better answers from some models than others, predicting in advance which model will give you the better answer is hard.
reply
jasonjmcghee
11 hours ago
[-]
Idk if I'm just holding it wrong, but calling Gemini 3 "the best model in the world" doesn't line up with my experience at all.

It seems to just be worse at actually doing what you ask.

reply
cj
11 hours ago
[-]
It's like saying "Star Wars is the best movie in the world" - to some people it is. To others it's terrible.

I feel like it would be advantageous to move away from a "one model fits all" mindset, and move towards a world where we have different genres of models that we use for different things.

The benchmark scores are turning out to be about as useful as Tomatometer movie scores. Something can score high, but if that's not the genre you like, the high score doesn't guarantee you'll like it.

reply
everdrive
10 hours ago
[-]
Outside of experience and experimentation, is there a good way to know what models are strong for what tasks?
reply
grahamplace
10 hours ago
[-]
reply
jasonjmcghee
10 hours ago
[-]
Unless you overfit to benchmark-style scenarios and get worse for real-world use.
reply
jpollock
10 hours ago
[-]
Not really, it's like asking which C compiler was best back in the 90s.

You had Watcom, Intel, GCC, Borland, Microsoft, etc.

They all had different optimizations and different target markets.

Best to make your tooling model-agnostic. I understand that tuned prompts are model _version_ specific, so you will need this anyway.

reply
matwood
10 hours ago
[-]
What I like most about Gemini is that it's perfectly happy to say that what I asked it to proofread or improve is good as it is. ChatGPT has never said 'this is good to go', not even about its own output that it had just produced.
reply
wrsh07
10 hours ago
[-]
It's a good model. Zvi also thought it was the best model until Opus 4.5 was announced a few hours after he wrote his post

https://thezvi.substack.com/p/gemini-3-pro-is-a-vast-intelli...

reply
cs702
10 hours ago
[-]
The analysis fails to mention that if TPUs take market share from Nvidia GPUs, JAX's software ecosystem likely would also take market share from the PyTorch+Triton+CUDA software ecosystem.
reply
claytonjy
10 hours ago
[-]
Not even Google thinks this will happen, given their insistence on only offering TPU access through their cloud.
reply
cs702
10 hours ago
[-]
As the OP points out, Google is now selling TPUs to at least some corporate customers.
reply
hackernewds
10 hours ago
[-]
they are not though
reply
martin_drapeau
9 hours ago
[-]
Most analysts seem to forget what actual consumers do. Normal people use ChatGPT. They accidentally use Gemini when they Google something. But I don’t know anyone non-technical who has ditched ChatGPT as their default LLM. For 99% of questions these days, it’s plenty good enough—there’s just no real reason to switch.

OpenAI's strategy is to eventually overtake search. I'd be curious to see a chart of their progress over time, without Google trying to distort the picture with Gemini benchmark results and usage stats which are tainted by sheer numbers from traditional search and their apps.

reply
msabalau
9 hours ago
[-]
We can see what consumers do. The Gemini app is the second most downloaded app for the iPhone, right behind OpenAI's. Apple is certainly not trying to "distort the picture" as you evidently wish to believe that Google is doing.

That's hardly an indication that actual "non-technical" consumers don't care, or that there is any sort of barrier to either using both apps or using whichever is better at the moment, or whichever is more helpful in generating the meme of the moment.

If it were actually true that OpenAI was "plenty good enough" for 99% of questions that people have, and that "there is no reason to switch" then OpenAI could just stop training new models, which is absurdly expensive. They aren't doing that, because they sensibly believe that having better models matters to consumers.

reply
knallfrosch
9 hours ago
[-]
> usage stats which are tainted by sheer numbers from traditional search and their apps.

You're looking at this backwards. Being able to push Gemini in your face on Gmail, Gdocs, Google Search, Android, Android TV, Android Auto, and Pixel devices sure is annoying, disruptive, and unfair. But market-wise, it sure is a strength, not a weakness.

reply
raw_anon_1111
8 hours ago
[-]
And is it “fair” that a company can gain market share while losing billions backed by VC funding?
reply
raw_anon_1111
8 hours ago
[-]
Yes, and more normal people use Google, which is the default search engine for Android and iOS. AI overviews and AI mode just have to be good enough to keep people from switching.

Google’s increasing revenues and profits, and even Apple’s hints that it isn’t seeing decreased revenue from its affiliation with Google, suggest that people are not replacing Google search with ChatGPT.

Besides, end-user chatbot use is just a small part of the revenue from LLMs.

reply
nikcub
8 hours ago
[-]
> But I don’t know anyone non-technical who has ditched ChatGPT as their default LLM.

Google is giving away a year of Gemini Pro to students, which has driven a big shift. The FT reported today[0] that new Gemini app downloads are almost catching up to ChatGPT's.

[0] https://www.ft.com/content/8881062d-ff4f-4454-8e9d-d992e8e2c...

reply
bloppe
9 hours ago
[-]
I don't think that's a distorted picture at all. Google is still handling billions of searches per day. A huge number of those include AI answers. To all those billions of people who still reach for the omnibar first, Gemini is becoming their LLM of first resort.
reply
bambax
9 hours ago
[-]
I like Google Search for simple searches and still use it all the time. But for "complex" searches that are more like research, ChatGPT is actually pretty good, and provides actual, working links whereas Gemini seems to hallucinate more (in my experience).
reply
diavarlyani
10 hours ago
[-]
2018 me: ‘Aggregation Theory is basically unbeatable.’

2025 me, watching OpenAI voluntarily stay in the top-right quadrant while Google happily camps bottom-left with infinite ammo: ‘…maybe there’s an asterisk.’

Great update to the Moat Map.
reply
stanfordkid
9 hours ago
[-]
I agree with his take on Google’s enormous strategic advantages.

I think he’s wrong that OpenAI can win this by upping the revenue engine through ads or through building a consumer behavior moat.

At the end of the day these are chat bots. Nobody really cares about the URL, and the interface is simple. Google won search by having deeply superior search algorithms and capitalizing on user traffic data to improve and refine those algorithms. It didn’t win because of AdWords … it just got rich that way.

The AI market is an undifferentiated oligopoly (IMO) and the only way to win is by having better algos trained on more data that give better results. Google can win here. It is already winning on video and image generation.

I actually think OpenAI is (wrongly) following Ben’s exact advice, going to the edge and the consumer interface through moves like the acquisition of Jony Ive’s device company. This is a failing move and an area where Google can also easily win with Android. I agree with Ben that upping the revenue makes sense, but they can’t do it at the cost of user experience. Too much is at stake.

reply
outside1234
8 hours ago
[-]
Also, it is not like OpenAI is going to go and build ads infrastructure overnight. Google has DECADES of experience with this.
reply
mackross
8 hours ago
[-]
An often overlooked extra advantage to Google is their massive existing ad inventory. If LLMs do end up being ad supported and both products are roughly the same, Google wins. The large supply of ads direct from a diverse set of advertisers means they can fill more ad slots with higher quality ads, for a higher price, and at a lower cost. They’re also already staffed with an enormous amount of talent for ad optimization. Just this advantage would translate into higher sustained margins (even assuming similar costs), but given TPU it might be even greater. This plus the gobs of cash they already spin off, and their massive war chest means they can spend an ungodly amount on user acquisition. It’s their search playbook all over again.
reply
dismalaf
9 hours ago
[-]
At this point it's not even OpenAI vs Google; it's OpenAI vs themselves. They're burning through more money building the models than they can realistically hope to make back. When their investors decide they've burned through enough, it's basically over.

Google's revenue stream and structural advantages mean they can continue this forever, and if another AI winter comes, they can chill, because LLM-based AI isn't even their main product.

reply
aworks
11 hours ago
[-]
"the naive approach to moats focuses on the cost of switching; in fact, however, the more important correlation to the strength of a moat is the number of unique purchasers/users."
reply
esafak
11 hours ago
[-]
I was not able to find any research that posits that moat strength is determined by customer diversity.

I think customer diversity correlates instead with resilience.

reply
caminante
10 hours ago
[-]
The author isn't non-financial, but the "moat 2.0" framing doesn't feel right.

> More than anything, though, I believe in the market power and defensibility of 800 million users, which is why I think ChatGPT still has a meaningful moat.

It's 800M weekly active users according to ChatGPT. I keep hearing that once you segment paid and unpaid, daily ChatGPT users fall off dramatically (<10% for paid and far less for unpaid).

reply
Jyaif
10 hours ago
[-]
I would say that customer diversity may be a marker of past resilience, and likely results in a moat.

Customer diversity says nothing about current or future resilience.

reply
citizenpaul
10 hours ago
[-]
It's a long article, and one of its first points, "Google strikes back", is completely wrong IME. Not only is Gemini much worse than all the other models, the latest release is now so bad it is almost useless half the time or more. It's hard to read further after such a bad take versus what I've seen myself. I don't care what benchmarks it beats if it just churns out comically bad results for me.
reply
Crash0v3rid3
7 hours ago
[-]
Mind sharing some examples of bad results you've seen vs other LLMs?
reply