And with that, I will never read anything this guy writes again :)
It is, for the agents of the shareholders, as long as the actions of those agents are legal, of course. That's why it's not legal to put fentanyl into every drug sold: fentanyl is illegal.
But it is legal to put (more) sugar and/or salt into processed foods.
That's why I used the sugar example: it's starting to be demonstrably harmful at the large quantities now being consumed.
I am against preventative "harm" laws when the harm hasn't been demonstrated, as they restrict freedom, add red tape to innovation, and stifle startups from exploring the space of possibilities.
which is exactly what the law of the jungle is. And guess who sits at the top within that regime?
Humans would devolve back into that if not for the state's enforcement of its monopoly on violence. Therefore it is the responsibility of the state to make sure regulations are sound enough to prevent the stab-stab-stab, not the responsibility of the individual to forgo an advantage that was there for the taking.
And that's why the government regulates stabbing.
Not all people everywhere, but most successful businesspeople.
> It'd be so much more efficient to just stab-stab-stab and take the money directly.
It isn't though? If you do that then you get locked up and lose the money, so the smart psychopaths go into business instead.
Joke: the World Council of Animals wraps up its morning session with "OK great, now who is for lunch?"
A speculative example: AI fails and crashes out, but not before we build out huge DCs and power generation, which then get used by the next valuable idea, one that wouldn't have been possible without that infrastructure already existing.
It sounded vaguely like the broken window fallacy: a broken window creating "work".
Is the value of bubbles in the trying out new products/ideas and pulling funds from unsuspecting bag holders?
Otherwise it sounds like a huge destruction of stakeholder value, but that seems to be how venture funding works.
In the event of a crash, the current generation of cards will still be just fine for a wide variety of AI/ML tasks. The main problem is that we'll have more than we know what to do with if someone has to sell off their million-card megacluster...
The difference of course is that when a startup goes out of business, it's fine (from my perspective) because it was probably all VC money anyway and doesn't cause much damage, whereas an economy-wide bubble popping causes a lot of damage.
I don't know that he's arguing that they are good, but rather that _some_ kinds of bubbles can have a lot of positive effects.
Maybe he's doing the same thing here, I don't know. I see the words "advertising would make X Product better" and I stop reading. Perhaps I am blindly following my own ideology here :shrug:.
sure, there are APIs and that takes effort to switch... but many of them are nearly identical, and the ecosystem effect of ~all tools supporting multiple models seems far stronger than the network effect of your parents using ChatGPT specifically.
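As a concrete (hypothetical) sketch: many vendors expose OpenAI-compatible chat endpoints, so "switching" is often just a base URL and a model id. The alternate URL and model names below are placeholders, not real endpoints:

    # Minimal sketch using the OpenAI Python SDK (>=1.0); the alternate
    # base_url and model ids are illustrative placeholders.
    from openai import OpenAI

    def make_client(provider: str) -> tuple[OpenAI, str]:
        if provider == "openai":
            return OpenAI(), "gpt-4o-mini"  # reads OPENAI_API_KEY from the env
        # Many other vendors speak the same wire protocol; only the
        # base_url, key, and model id change.
        return OpenAI(base_url="https://llm.example.com/v1",
                      api_key="OTHER_VENDOR_KEY"), "other-model"

    client, model = make_client("openai")
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(resp.choices[0].message.content)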
I would say that, on this topic (ads on internet content), Ben Thompson may not be as objective a perspective as he has on other topics.
If there are ads on a side bar, related or not to what the user is searching for, any adblock will be able to deal with them (uBlock is still the best, by far).
But if "ads" are woven into the responses in a manner that could be more or less subtle, sometimes not even quoting a brand directly, but just setting the context, etc., this could become very difficult.
How much?
Do you realize how much product placement has been in movies since... well, the existence of movies?
They fail to mention Google's edge: the Inter-Chip Interconnect and the allegedly ~1/3 price. Then they talk about a software moat, and it sounds like they've never even compiled a hello world on either architecture. smh
And this comes out days after many in-depth posts like:
https://newsletter.semianalysis.com/p/tpuv7-google-takes-a-s...
A crude Google search AI summary of those would be better than this dumb blogpost.
(Unless those writings are looking to dehumanize or strip people of rights or inflame hate - I'm not talking about propaganda or hate speech here.)
With that said there's no accounting for taste.
The Substack founders unofficially marketed it early on as “Stratechery for independent authors”.
Your analysis concerning itself with the technology instead of the business is about like Rob Malda not understanding the iPod's success: "No wireless. Less space than a Nomad. Lame."
Even if you just read this article: he never argued that Google didn't have the best technology; he was saying just the opposite. Nvidia is in good shape precisely because everyone who is not Google is now going to have to spend more on Nvidia to keep up.
He has said that AI may turn out to be a "sustaining innovation" (a term coined by Clayton Christensen) and that the big winners may be Google, Meta, Microsoft, and Amazon, because they can leverage their pre-existing businesses and infrastructure.
Even Apple might be better off since they are reportedly going to just throw a billion at Google for its model.
The belief that adding ads makes things better would be an extremely convenient belief for a writer to have, and I can easily see how that could result in them getting more revenue than other writers. That doesn't make it any less dumb.
Any use of LLMs by other people reduces his value.
eg Yuval Noah Harari, Bari Weiss, Matthew Yglesias
Discussing the "innovator's dilemma" unironically is a full stop for me.
There, fixed.
I frequently ask ChatGPT to research products, look at reviews, etc., and it's pretty obvious that I want to buy something, yet the bridge from "researching products" to "buying stuff" is basically non-existent on ChatGPT. Some affiliate relationships with merchants might actually be quite useful for a lot of people and would probably generate a ton of revenue.
I think this is intentional by Altman. He's a salesman, after all. When there is infinite possibility, he can sell any vision of future revenue and margins. When there are no concrete numbers, it's your word against his.
Once they try to monetize, however, he's boxed in. And the problem for OpenAI, versus Google in its early days, is that he needs money and chips now. He needs hundreds of billions of dollars. Trillions of dollars.
Ad revenue numbers get in the way. It will take time to optimize; you’ll get public pushback and bad press (despite what Ben writes, ads will definitely not be a better product experience.)
It might be the case that real revenue is worse than hypothetical revenue.
It's extremely easy to write a library that makes switching between models trivial. I could add OpenAI support; it would be just slightly more complicated because I'd need a separate set of API keys, whereas now I can just use my AWS credentials.
Also, of course, latency would theoretically be worse, since when you host on AWS and use AWS for inference you stay within the internal network (yes, I know to use VPC endpoints).
There is no moat around switching models, contrary to what Ben says.
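To illustrate, a hypothetical sketch of the kind of wrapper I mean (class and model names made up; boto3's converse and the OpenAI chat completions call are real APIs). Bedrock auth rides on ambient AWS credentials; OpenAI is the one needing its own key:

    import os
    import boto3
    from openai import OpenAI

    class BedrockLLM:
        def __init__(self, model_id: str):
            self.client = boto3.client("bedrock-runtime")  # AWS creds from env/role
            self.model_id = model_id

        def complete(self, prompt: str) -> str:
            resp = self.client.converse(
                modelId=self.model_id,
                messages=[{"role": "user", "content": [{"text": prompt}]}],
            )
            return resp["output"]["message"]["content"][0]["text"]

    class OpenAILLM:
        def __init__(self, model: str):
            self.client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
            self.model = model

        def complete(self, prompt: str) -> str:
            resp = self.client.chat.completions.create(
                model=self.model,
                messages=[{"role": "user", "content": prompt}],
            )
            return resp.choices[0].message.content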
But, talk to any (or almost any) non-developer and you'll find they 1/ mostly only use ChatGPT, sometimes only know of ChatGPT and have never heard of any other solution, and 2/ in the rare case they did switch to something else, they don't want to go back, they're gone for good.
Each provider has a moat that is its number of daily users; and although it's a little annoying to admit, OpenAI has the biggest moat of them all.
I think the combination of AI overviews and a separate “AI mode” tab is good enough.
I would think that Gemini (the model) will add profit to Google way before OpenAI ever becomes profitable as they leverage it within their business.
Why would I pay for openrouter.ai and add another dependency? If I’m just using Amazon Bedrock hosted models, I can just use the AWS SDK and change the request format slightly based on the model family and abstract that into my library.
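Something like this (a sketch; the per-family request/response shapes are from memory of the Bedrock docs, so treat them as approximate and verify before use):

    # Sketch: dispatch on model family over Bedrock's invoke_model.
    import json
    import boto3

    client = boto3.client("bedrock-runtime")

    def build_body(model_id: str, prompt: str) -> dict:
        if model_id.startswith("anthropic."):
            return {"anthropic_version": "bedrock-2023-05-31",
                    "max_tokens": 512,
                    "messages": [{"role": "user", "content": prompt}]}
        if model_id.startswith("amazon.titan"):
            return {"inputText": prompt,
                    "textGenerationConfig": {"maxTokenCount": 512}}
        raise ValueError(f"unhandled model family: {model_id}")

    def extract_text(model_id: str, payload: dict) -> str:
        if model_id.startswith("anthropic."):
            return payload["content"][0]["text"]
        return payload["results"][0]["outputText"]  # Titan

    def complete(model_id: str, prompt: str) -> str:
        resp = client.invoke_model(modelId=model_id,
                                   body=json.dumps(build_body(model_id, prompt)))
        return extract_text(model_id, json.loads(resp["body"].read()))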
That said, I don't believe oai's models consistently produce the best results.
maybe another way of saying the same thing is that there is still a lot of work to make eval tooling a lot better!
There's too much entropy in the system. Context babysitting is our future.
I've created a framework that lets me test quality in an automated way across prompt changes and models, and compare cost/speed/quality.
The only thing out of all those that requires humans to judge quality is RAG results.
One of Anthropic's models did the best at image understanding, with Amazon's Nova Pro slightly behind.
For my tests, I used a customer's specific set of test data.
For RAG I forget, but it's much more subjective. I just gave the customer the ability to configure the model and modify the prompt so they could choose.
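The core loop doesn't have to be fancy, something like this (a hypothetical sketch, not my actual framework; complete(), the price table, and the scorer are stand-ins for whatever you actually plug in):

    import time

    PRICES = {"model-a": 3.00, "model-b": 0.80}  # $/1M output tokens, made up

    def score(expected: str, actual: str) -> float:
        # Crude exact-match scoring; swap in a task-appropriate metric.
        return 1.0 if expected.strip() == actual.strip() else 0.0

    def run_eval(models, prompts, cases, complete):
        # complete(model, prompt) -> str is supplied by the caller.
        results = []
        for model in models:
            for name, template in prompts.items():
                total, start = 0.0, time.time()
                for case in cases:
                    out = complete(model, template.format(**case))
                    total += score(case["expected"], out)
                results.append({"model": model, "prompt": name,
                                "quality": total / len(cases),
                                "secs": round(time.time() - start, 1),
                                "usd_per_1m_out": PRICES.get(model)})
        return sorted(results, key=lambda r: -r["quality"])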
It seems to just be worse at actually doing what you ask.
I feel like it would be advantageous to move away from a "one model fits all" mindset, and move towards a world where we have different genres of models that we use for different things.
The benchmark scores are turning into being just as useful as tomatometer movie scores. Something can score high, but if that's not the genre you like, the high score doesn't guarantee you'll like it.
You had Watcom, Intel, GCC, Borland, Microsoft, etc.
They all had different optimizations and different target markets.
Best to make your tooling model-agnostic. I understand that tuned prompts are model _version_ specific, so you will need this anyway.
https://thezvi.substack.com/p/gemini-3-pro-is-a-vast-intelli...
OpenAI's strategy is to eventually overtake search. I'd be curious to see a chart of their progress over time, without Google distorting the picture with Gemini benchmark results and usage stats that are inflated by sheer numbers from traditional search and its apps.
That's hardly an indication that actual "non-technical" consumers don't care, or that there is any sort of barrier to either using both apps or using whichever is better at the moment, or whichever is more helpful in generating the meme of the moment.
If it were actually true that OpenAI was "plenty good enough" for 99% of questions that people have, and that "there is no reason to switch" then OpenAI could just stop training new models, which is absurdly expensive. They aren't doing that, because they sensibly believe that having better models matters to consumers.
You're looking at this backwards. Being able to push Gemini into your face on Gmail, Gdocs, Google Search, Android, Android TV, Android Auto, and Pixel devices sure is annoying, disruptive, and unfair. But market-wise, it sure is a strength, not a weakness.
Google's increasing revenues and profits, and even Apple hinting that it isn't seeing decreased revenue from its affiliation with Google, suggest that people are not replacing Google search with ChatGPT.
Besides, end-user chatbot use is just a small part of the revenue from LLMs.
Google is giving away a year of Gemini Pro to students, which has driven a big shift. The FT reported today[0] that new Gemini app downloads are almost catching up to ChatGPT's.
[0] https://www.ft.com/content/8881062d-ff4f-4454-8e9d-d992e8e2c...
I think he’s wrong that OpenAI can win this by upping the revenue engine through ads or through building a consumer behavior moat.
At the end of the day these are chatbots. Nobody really cares about the URL, and the interface is simple. Google won search by having deeply superior search algorithms and capitalizing on user traffic data to improve and refine those algorithms. It didn't win because of AdWords... it just got rich that way.
The AI market is an undifferentiated oligopoly (IMO) and the only way to win is by having better algos trained on more data that give better results. Google can win here. It is already winning on video and image generation.
I actually think OpenAI is (wrongly) following Ben's exact advice: going to the edge and the consumer interface through moves like the acquisition of Jony Ive's device company. This is a failing move, and an area where Google can also easily win with Android. I agree with Ben that upping the revenue makes sense, but they can't do it at the cost of user experience. Too much is at stake.
Google's revenue stream and structural advantages mean they can continue this forever and if another AI winter comes, they can chill because LLM-based AI isn't even their main product.
I think customer diversity correlates instead with resilience.
> More than anything, though, I believe in the market power and defensibility of 800 million users, which is why I think ChatGPT still has a meaningful moat.
It's 800M weekly active users according to ChatGPT. I keep hearing that once you segment paid and unpaid, daily ChatGPT users fall off dramatically (<10% for paid and far less for unpaid).
Customer diversity says nothing about current or future resilience.