OpenAI's cash burn will be one of the big bubble questions of 2026
244 points
6 hours ago
| 22 comments
| economist.com
avalys
3 hours ago
[-]
AI is going to be a highly-competitive, extremely capital-intensive commodity market that ends up in a race to the bottom competing on cost and efficiency of delivering models that have all reached the same asymptotic performance in the sense of intelligence, reasoning, etc.

The simple evidence for this is that everyone who has invested the same resources in AI has produced roughly the same result. OpenAI, Anthropic, Google, Meta, Deepseek, etc. There's no evidence of a technological moat or a competitive advantage in any of these companies.

The conclusion? AI is a world-changing technology, just like the railroads were, and it is going to soon explode in a huge bubble - just like the railroads did. That doesn't mean AI is going to go away, or that it won't change the world - railroads are still here and they did change the world - but from a venture investment perspective, get ready for a massive downturn.

reply
jfrbfbreudh
50 minutes ago
[-]
Google’s moat:

Try “@gmail” in Gemini

Google’s surface area to apply AI is larger than any other company’s. And they have arguably the best multimodal model and indisputably the best flash model?

reply
avalys
33 minutes ago
[-]
If the “moat” is not AI technology itself but merely sufficient other lines of business to deploy it well, then that’s further evidence that venture investments in AI startups will yield very poor returns.
reply
edaemon
12 minutes ago
[-]
That kind of makes it sound like AI is a feature and not a product, which supports avalys' point.
reply
dartharva
5 minutes ago
[-]
Also, Google doesn't have to finance Gemini using venture capital or debt, it can use its own money.
reply
fooblaster
3 hours ago
[-]
There is a pretty big moat for Google: extreme amounts of video data on their existing services and absolutely no dependence on Nvidia and it's 90% margin.
reply
simonsarris
1 hour ago
[-]
Google has several enviable assets, if not moats then at least redoubts: TPUs, massive infrastructure, and their own cloud services; they own delivery mechanisms on mobile (Android) and on every device (Chrome). And Google and YouTube are still the #1 and #2 most visited websites in the world.
reply
xivzgrev
1 hour ago
[-]
Not to mention security. I'd trust Google more not to have a data breach than OpenAI or whomever. Email accounts are hugely valuable, but I haven't seen a Google data breach in the 20+ years I've been using them. This matters because I don't want my chats out there in public.

Also integration with other services. I just had Gemini summarize the contents of a Google Drive folder and it was effortless & effective

reply
devsda
1 hour ago
[-]
Don't forget the other moat.

While their competitors have to deal with actively hostile attempts to stop scraping training data, in Google's case almost everyone bends over backwards to give them easy access.

reply
nateb2022
3 hours ago
[-]
I have yet to be convinced the broader population has an appetite for AI-produced cinematography or videos. Dependence on Nvidia is no more of a liability than dependence on electricity rates; it's not as if it's in Nvidia's interest to see one of its large customers fail. And pretty much any of the other Mag7 companies are capable of developing in-house TPUs and are already independently profitable, so Google isn't alone here.
reply
ralph84
2 hours ago
[-]
The value of YouTube for AI isn't making AI videos, it's that it's an incredibly rich source for humanity's current knowledge in one place. All of the tutorials, lectures, news reports, etc. are great for training models.
reply
Nextgrid
2 hours ago
[-]
Is that actually a moat? Seems like all model providers managed to scrape the entire textual internet just fine. If video is the next big thing I don’t see why they won’t scrape that too.
reply
monocasa
2 hours ago
[-]
And we're probably already starting to see that, given the semi-recent escalations in the game of cat and mouse between YouTube and the likes of youtube-dl.

Reminds me of Reddit cracking down on API access after realizing that its data was useful. But I'd expect YouTube both to be quicker on the draw, knowing about AI data collection, and to have more time, because of the orders of magnitude greater bandwidth required to scrape video.

reply
jakeydus
11 minutes ago
[-]
And reddit turned around and sold it all for a mess of pottage…
reply
awesome_dude
1 hour ago
[-]
> Seems like all model providers managed to scrape the entire textual internet just fine

Google, though, has been doing it for literal decades. That could mean they have something nobody else (except archive.org) has: a history of how the internet/knowledge has evolved.

reply
fooblaster
3 hours ago
[-]
If you think they are going to catch up with Google's software and hardware ecosystem on their first chip, you may be underestimating how hard this is. Google is on TPU v7. Meta has already tried with MTIA v1 and v2; those haven't been deployed at scale for inference.
reply
nateb2022
2 hours ago
[-]
I don't think many of them will want to, though. I think as long as Nvidia/AMD/other hardware providers offer inference hardware at prices decent enough to not justify building a chip in-house, most companies won't. Some of them will probably experiment, although that will look more like a small team of researchers + a moderate budget rather than a burn-the-ships we're going to use only our own hardware approach.
reply
fooblaster
2 hours ago
[-]
Well, Anthropic just purchased a million TPUs from Google because, even with a healthy margin for Google, it's far more cost-effective given Nvidia's insane markup. That speaks for itself. Nvidia will not drop their margin because it would tank their stock price; that's half the reason for all this circular financing: lowering their effective margin without lowering it on paper.
reply
fragmede
29 minutes ago
[-]
And, don't forget everyone's buying from TSMC in every case!
reply
margalabargala
2 hours ago
[-]
It's in Nvidia's interest to charge the absolute maximum they can without their customers failing. Every dollar of Nvidia's margin is a dollar of your own lost margin. Utilities don't do that. Nvidia is objectively a far bigger liability than electricity rates.
reply
bdangubic
2 hours ago
[-]
it is in every business’s best interest to charge the maximum…
reply
wrs
37 minutes ago
[-]
Utilities and insurance companies are two examples of businesses regulated so they can't charge the maximum, for public policy reasons.
reply
bdangubic
1 minute ago
[-]
Are we suggesting that Nvidia/Google/etc. be regulated like utilities?
reply
Seattle3503
2 hours ago
[-]
The video data is probably good for training models, including text models.
reply
stevenjgarner
34 minutes ago
[-]
Agreed. Even xAI's (Grok's) access to live data on x.com and millions of live video inputs from Tesla is a moat not enjoyed by OpenAI.
reply
fooblaster
3 hours ago
[-]
And yes, all their competitors are making custom chips. Google is on TPU v7. Absolutely nobody among their competitors is going to get this right on the first try; Google didn't.
reply
CharlieDigital
2 hours ago
[-]
The bigger problem for late starters now is that it will be hard to match the performance and cost of Google/Nvidia. It's an investment that had to have started years ago to be competitive now.
reply
loloquwowndueo
1 hour ago
[-]
In this case the difference between its and it’s does alter the meaning of the sentence.
reply
nateb2022
3 hours ago
[-]
> AI is going to be a highly-competitive, extremely capital-intensive commodity market

It already is. In terms of competition, I don't think we've seen any groundbreaking new research or architecture since the introduction of inference-time compute ("thinking") in late 2024/early 2025, circa OpenAI's o1.

The majority of the cost/innovation now is training this 1-2 year old technology on increasingly large amounts of content, and developing more hardware capable of running these larger models at greater scale. I think it's fair to say the majority of capital is now being dumped into hardware, whether that's HBM and related research, or increasingly powerful GPUs and TPUs.

But these components are applicable to a lot of other places other than AI, and I think we'll probably stumble across some manufacturing techniques or physics discoveries that will have a positive impact on other industries.

> that ends up in a race to the bottom competing on cost and efficiency of delivering

One could say that the introduction of the personal computer became a "race to the bottom." But it was only the start of the dot-com bubble era, a bubble that brought about a lot of beneficial market expansion.

> models that have all reached the same asymptotic performance in the sense of intelligence, reasoning, etc.

I definitely agree about the asymptotic performance. But I think the more exciting fact is that we can probably expect LLMs to get a LOT cheaper in the next few years as the current investments in hardware begin to pay off, and I think it's safe to assume that in 5-10 years, most entry-level laptops will be able to manage a local 30B-sized model while still being capable of multitasking. As it gets cheaper, more applications become practical.
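A quick sanity check on that claim: a model's weight footprint is roughly parameter count times bytes per weight. A back-of-envelope sketch, where the quantization levels and RAM interpretations are illustrative assumptions, not measurements:

```python
# Back-of-envelope: can an entry-level laptop hold a 30B model?
# All figures are illustrative assumptions, ignoring KV cache and runtime overhead.

def model_memory_gb(params_b: float, bits_per_weight: int) -> float:
    """Approximate weight memory in GB for a dense model."""
    bytes_total = params_b * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

for bits in (16, 8, 4):
    print(f"30B at {bits}-bit: ~{model_memory_gb(30, bits):.0f} GB of weights")

# 16-bit needs a workstation; 8-bit is high-end laptop territory;
# 4-bit (~15 GB) is plausible on a 32 GB machine with room left to multitask.
```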

---

Regarding OpenAI, I think it stands in a somewhat precarious spot, since the majority of its valuation is justified by nothing but expectations of future profit. Unlike Google, which was profitable before the introduction of Gemini, AI startups still need to establish profitability. Although initial expectations were for B2C models for these AI companies, I think most of the ones that survive will do so by pivoting to a B2B structure. It's fair to say that most businesses are more inclined to spend money chasing AI than individuals are, and that'll lead to an increase in AI consulting-type firms.

reply
mark_l_watson
2 hours ago
[-]
> in 5-10 years, most entry-level laptops will be able to manage a local 30B sized model

I suspect most of the excitement and value will be on edge devices. Models sized 1.7B to 30B have improved incredibly in capability in just the last few months and are unrecognizably better than a year ago. With improved science, new efficiency hacks, and new ideas, I can’t even imagine what a 30B model with effective tooling available could do in a personal device in two years time.

reply
sigbottle
1 hour ago
[-]
Very interested in this! I'm mainly a ChatGPT user; for me, o3 was the first sign of true "intelligence" (not 'sentience' or anything like that, just actual, genuine usefulness). Are these models at that level yet? Or are they o1? Still GPT4 level?
reply
logicprog
1 hour ago
[-]
Not nearly o3 level. Much better than GPT4, though! For instance Qwen 3 30b-a3b 2507 Reasoning gets 46 vs GPT 4's 21 and o3's 60-something on Artificial Analysis's benchmark aggregation score. Small local models ~30b params and below tend to benchmark far better than they actually work, too.
reply
airstrike
2 hours ago
[-]
> One could say that the introduction of the personal computer became a "race to the bottom." But it was only the start of the dot-com bubble era, a bubble that brought about a lot of beneficial market expansion.

I think the comparison is only half valid since personal computers were really just a continuation of the innovation that was general purpose computing.

I don't think LLMs have quite as much mileage to offer, so to continue growing, "AI" will need at least a couple step changes in architecture and compute.

reply
zozbot234
2 hours ago
[-]
I don't think anyone knows for sure how much mileage/scalability LLMs have. Given what we do know, I suspect if you can afford to spend more compute on even longer training runs, you can still get much better results compared to SOTA, even for "simple" domains like text/language.
reply
airstrike
1 hour ago
[-]
I think we're pretty much out of "spend more compute on even longer training runs" at this point.
reply
phyzix5761
2 hours ago
[-]
I personally use ChatGPT for search more than I do Google these days. More often than not, it gives me more exact results for what I'm looking for, and it produces links I can visit to get more information. I think this is where their competitive advantage lies, if they can figure out how to monetize it.
reply
raw_anon_1111
2 hours ago
[-]
We don't need anecdotes. We have data. Google has been announcing quarter after quarter of record revenues and profits and hasn't seen any decrease in search traffic. Apple also hinted that it didn't see any decreased revenues from the Google Search deal.

Google's AI answers are good enough, and there is a long history of companies that couldn't monetize traffic via ads. The canonical example is Yahoo: one of the most trafficked sites for 20 years, and it couldn't monetize.

2nd issue: defaults matter. Google is the default search engine on Android devices, iOS devices, and Macs, whether users are using Safari or Chrome. It's hard to get people to switch.

3rd issue: any money OpenAI makes off search ads, I'm sure Microsoft is going to want their cut. ChatGPT uses Bing.

4th issue: OpenAI's costs are a lot higher than Google's, and they probably won't be able to command a premium in ads. Google has its own search engine, its own servers, its own "GPUs" [sic].

5th: see #4. It costs OpenAI a lot more per ChatGPT request to serve a result than it costs Google. LLM search has a higher marginal cost.
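The gap in #5 is easy to sketch. A toy comparison, where every number is an assumption chosen only to show the shape of the argument, not a reported cost:

```python
# Toy marginal-cost comparison: classic search lookup vs. LLM-generated answer.
# All figures are assumptions for illustration, not actual vendor costs.

search_cost_per_query = 0.0002   # assumed: fractions of a cent for an index lookup
llm_cost_per_1k_tokens = 0.002   # assumed blended inference cost in $/1k tokens
tokens_per_answer = 1500         # assumed: prompt plus a reasoned answer

llm_cost_per_query = llm_cost_per_1k_tokens * tokens_per_answer / 1000
print(f"LLM answer: ~${llm_cost_per_query:.4f} vs search: ~${search_cost_per_query:.4f}")
print(f"ratio: ~{llm_cost_per_query / search_cost_per_query:.0f}x")
```

Even if the absolute numbers are off by a lot, the ratio is what matters: each LLM answer costs an order of magnitude more to serve, so the ads on it have to earn correspondingly more.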

reply
sod22
2 hours ago
[-]
I personally know people that used ChatGPT a lot but have recently moved to using Gemini.

There's a couple of things going on, but put simply: when there is no real lock-in, humans enjoy variety. Until one firm creates a superior product with lock-in, only those who are generating cash flows will survive.

OAI does not fit that description as of today.

reply
aprilthird2021
2 hours ago
[-]
I'm genuinely curious: why do you do this instead of a Google search, which also has an AI Overview/answer at the top (basically the same as putting your query into a chatbot), but ALSO has all the links from a regular Google search, so you can quickly corroborate the info using sources beyond the original AI result (including sources that disagree with the AI answer)?
reply
thom
1 hour ago
[-]
The regular google search AI doesn’t do thinky thinky mode. For most buying decisions these days I ask ChatGPT to go off and search and think for a while given certain constraints, while taking particular note of Reddit and YouTube comments, and come back with some recommendations. I’ve been delighted with the results.
reply
variadix
2 hours ago
[-]
This will remain the case until we have another transformer-level leap in ML technology. I don’t expect such an advancement to be openly published when it is discovered.
reply
energy123
2 hours ago
[-]

> There's no evidence of a technological moat or a competitive advantage in any of these companies.
I disagree based on personal experience. OpenAI is a step above in usefulness. Codex and GPT 5.2 Pro have no peers right now. I'm happy to pay them $200/month.

I don't use my Google Pro subscription much. Gemini 3.0 Pro spends 1/10th of the time thinking compared to GPT 5.2 Thinking and outputs a worse answer or ignores my prompt. Similar story with Deepseek.

The public benchmarks tell a different story which is where I believe the sentiment online comes from, but I am going to trust my experience, because my experience can't be benchmaxxed.

reply
wild_egg
2 hours ago
[-]
I still find it so fascinating how experiences with these models are so varied.

I find codex & 5.2 Pro next to useless and nothing holds a candle to Opus 4.5 in terms of utility or quality.

There's probably something in how varied human brains and thought processes are. You and I likely think through problems in some fundamentally different way that leads to us favouring different models that more closely align with ourselves.

No one seems to ever talk about that though and instead we get these black and white statements about how our personally preferred model is the only obvious choice and company XYZ is clearly superior to all the competition.

reply
yoyohello13
1 hour ago
[-]
There is always a comment like this in these threads. It’s just 50-50 whether it’s Claude or OpenAI.
reply
avalys
1 hour ago
[-]
I’m not saying that no company will ever have an advantage. But with the pace of advances slowing, even if others are 6-12 months behind OpenAI, the conclusion is the same.

Personally I find GPT 5.2 to be nearly useless for my use case (which is not coding).

reply
harrall
2 hours ago
[-]
I use both and ChatGPT will absolutely glaze me. I will intentionally say some BS and ChatGPT will say “you’re so right.” It will hilariously try to make me feel good.

But Gemini will put me in my place. Sometimes I ask my question to Gemini because I don’t trust ChatGPT’s affirmations.

Truthfully I just use both.

reply
gridspy
2 hours ago
[-]
I told ChatGPT via my settings that I often make mistakes and to call out my assumptions. So now it

1. Glazes me

2. Lists a variety of assumptions (some can be useful / interesting)

3. Answers the question

At least this way I don't spend a day pursuing an idea the wrong way because ChatGPT never pointed out something obvious.

reply
nubg
47 minutes ago
[-]
Care to share the system prompt?
reply
guluarte
1 hour ago
[-]
Codex is sooo slow, but it is good at planning; Opus is good at coding but not as good at seeing the big picture.
reply
johnnyanmac
2 hours ago
[-]
>That doesn't mean AI is going to go away, or that it won't change the world - railroads are still here and they did change the world - but from a venture investment perspective, get ready for a massive downturn.

I don't know why people always imply that "the bubble will burst" means "literally all AI will die out and nothing of use will remain". The dot-com bubble didn't kill the internet. But it was a bubble, and it burst nonetheless, with ramifications that spanned decades.

All it really means when you believe a bubble will pop is "this asset is overvalued and will soon, rapidly deflate in value to something more sustainable". And that's a good thing long term, despite the rampant destruction such a crash will cause over the next few years.

reply
Davidzheng
55 minutes ago
[-]
Um, Meta didn't achieve the same results yet. And does it matter if they can all achieve the same results, if they all manage high enough payoffs? I think subscription-based income is only the beginning. The next stage is AI-based subcompanies encroaching on other industries (e.g. DeepMind's drug company).
reply
adventured
2 hours ago
[-]
Your premise is wrong in a very important way.

The cost of entry is far beyond extraordinary. You're acting like anybody can gain entry, when the exact opposite is the case. The door is closing right now. Just try to compete with OpenAI, let's see you calculate the price of attempting it. Scale it to 300, 500, 800 million users.

Why aren't there a dozen more Anthropics, given the valuation in question (and potential IPO)? Because it'll cost you tens of billions of dollars just to try to keep up. Nobody will give you that money. You can't get the GPUs, you can't get the engineers, you can't get the dollars, you can't build the datacenters. Hell, you can't even get the RAM these days, nor can you afford it.

Google & Co are capturing the market and will monetize it with advertising. They will generate trillions of dollars in revenue over the coming 10-15 years by doing so.

The barrier to entry is the same one that exists in search: it'll cost you well over one hundred billion dollars to try to be in the game at the level that Gemini will be at circa 2026-2027, for just five years.

Please, inform me of where you plan to get that one hundred billion dollars just to try to keep up. Even Anthropic is going to struggle to stay in the competition when the music (funding bubble) stops.

There are maybe a dozen or so companies in existence that can realistically try to compete with the likes of Gemini or GPT.

reply
zozbot234
2 hours ago
[-]
> Just try to compete with OpenAI, let's see you calculate the price of attempting it. Scale it to 300, 500, 800 million users.

Apparently the DeepSeek folks managed that feat. Even with the high initial barriers to entry you're talking about, there will always be ways to compete by specializing in some underserved niche and growing from there. Competition seems to be alive and well.

reply
thom
1 hour ago
[-]
DeepSeek certainly managed that on the training side but in terms of inference, the actual product was unusably slow and unreliable at launch and for several months after. I have not bothered revisiting it.
reply
bee_rider
3 hours ago
[-]
Massive upfront costs and second place is just first loser. It’s like building fabs but your product is infinitely copyable. Seems pretty rough.
reply
gerdesj
3 hours ago
[-]
What exactly is "second" place? No-one really knows what first place looks like. Everyone is certain that it will cost an arm, a leg and most of your organs.

For me, I think the possible winners will be close to fully funded up front, and the losers will be trying to turn debt into profit and failing.

The rest of us self-hoster types are hoping for a massive glut of GPUs and RAM to be dumped in a global fire sale. We are patient, and we have all those free offerings to play with for now to keep us going. Even the subs are so far somewhat reasonable, but we will flee in droves as soon as you try to ratchet up the price.

It's a bit unfortunate but we are waiting for a lot of large meme companies to die. Soz!

reply
vkou
57 minutes ago
[-]
You and the other hobbyists aren't what's driving valuations. Enterprise subscriptions are.
reply
raw_anon_1111
2 hours ago
[-]
First place looks a lot like Google…
reply
guluarte
1 hour ago
[-]
Also, open-source models are just months behind.
reply
api
1 hour ago
[-]
If performance indeed asymptotes, and if we are not at the end of silicon scaling or decreasing cost of compute, then it will eventually be possible to run the very best models at home on reasonably priced hardware.

Eventually the curves cross. Eventually the computer you can get for, say, $2000, becomes able to run the best models in existence.

The only way this doesn’t happen is if models do not asymptote or if computers stop getting cheaper per unit compute and storage.

This wouldn’t mean everyone would actually do this. Only sophisticated or privacy conscious people would. But what it would mean is that AI is cheap and commodity and there is no moat in just making or running models or in owning the best infrastructure for them.
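The curve-crossing argument above can be made concrete with a toy calculation; every constant here is an assumption for illustration, not a forecast:

```python
# Toy model of "the curves cross": if the compute needed to run the best model
# plateaus while compute per dollar keeps improving, home hardware catches up.
# All constants are illustrative assumptions.

required_compute = 1000.0    # arbitrary units to run the best (asymptoted) model
home_budget_compute = 40.0   # assumed units a ~$2000 machine buys today
halving_years = 2.5          # assumed doubling period for compute per dollar

years = 0.0
while home_budget_compute < required_compute:
    home_budget_compute *= 2  # one doubling of compute per dollar
    years += halving_years

print(f"crossover in roughly {years:.1f} years under these assumptions")
```

Change any constant and the crossover date moves, but as long as the numerator plateaus and the denominator keeps shrinking, the crossover itself is inevitable, which is the point being made.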

reply
adamnemecek
2 hours ago
[-]
AI is capital intensive because autodiff kinda sucks.
reply
dheera
1 hour ago
[-]
People seem to assume that OpenAI and Anthropic dying would be synonymous with AI dying, and that's not the case. OpenAI and Anthropic spent a lot of capital on important research, and if shareholders and equity markets cannot learn to value and respect that and instead let these companies die, new companies will be formed with the same tech, possibly by the same general group of people; they will thrive and conveniently leave said shareholders out.

Google was built on the shoulders of a lot of infrastructure tech developed by former search engine giants. Unfortunately the equity markets decided to devalue those giants instead of applaud them for their contributions to society.

reply
raw_anon_1111
26 minutes ago
[-]
You weren't around pre-Google, were you? The only thing Google learned from other search engines is what not to do, like ranking based on the number of times a keyword appeared, or using expensive bespoke servers.
reply
tootie
1 hour ago
[-]
Isn't it really the other way around? Not to say OpenAI and Anthropic haven't done important work, but the genesis of this entire market was the attention paper ("Attention Is All You Need") that came out of Google. We have the private messages inside OpenAI saying they needed to get to market ASAP or Google would kill them.
reply
ares623
3 hours ago
[-]
Just in time for a Government guaranteed backstop.
reply
gerdesj
2 hours ago
[-]
"AI is going to be a highly-competitive" - In what way?

It is not a railroad and the railroads did not explode in a bubble (OK a few early engines did explode but that is engineering). I think LLM driven investments in massive DCs is ill advised.

reply
fcantournet
2 hours ago
[-]
Yes they did, at least twice in the 19th century. Those were the largest financial crises before 1929.
reply
johnnyanmac
2 hours ago
[-]
It did. I question the issue of "what problem am I trying to solve" with AI, though. Transportation across a huge swath of land had a clear problem space, and trains offered a very clear solution: lay dedicated rail and you can transport 100x the resources at 10x the speed of a horseman (and I'm probably underselling those gains). In times when trekking across a continent took months, the efficiencies in communication and supply lines were immediately clear.

AI feels like a solution looking for a problem. Especially with 90% of consumer facing products. Were people asking for better chatbots, or to quickly deepfake some video scene? I think the bubble popping will re-reveal some incredible backend tools in tech, medical, and (eventually) robotics. But I don't think this is otherwise solving the problems they marketed on.

reply
heavyset_go
1 hour ago
[-]
> AI feels like a solution looking for a problem.

The problem is increasing profits by replacing paid labor with something "good enough".

reply
johnnyanmac
22 minutes ago
[-]
Doesn't sound like a very profitable problem to solve. At least, not in the long term (which no one orchestrating this is thinking in).
reply
heavyset_go
7 minutes ago
[-]
Long term is feudalism, the short term is how we get there.
reply
MangoToupe
55 minutes ago
[-]
This is a use case that hasn't yet been proven out, though. "Good enough" for an executive may not be "good enough" to keep the company solvent, and there's no shortage of private equity morons who have no understanding of their own assets.
reply
heavyset_go
8 minutes ago
[-]
I agree, but it's the bet they're making. You don't end up with trillions in investment and valuations with chatbots and meme video generators.
reply
aaronblohowiak
1 hour ago
[-]
Your view is ahistorical.
reply
cmiles8
1 hour ago
[-]
AI is turning into the worst possible business setup for AI startups: a commodity that requires huge capital investment and ongoing innovation to stay relevant. There's no room for someone to run a small but profitable gold mine or a couple of oil wells on the side. The only path to survival is investing crazy sums just to stay relevant and keep up. Meanwhile, customers have virtually zero brand loyalty, so if you slip behind even a bit, folks will swap API endpoints and leave you in the dust. It's a terrible setup, business-wise.

There’s also no real moat with all the major models converging to be “good enough” for nearly all use cases. Far beyond a typical race to the bottom.

Those like Google with other products will just add AI features and everyone else trying to make AI their product will just get completely crushed financially.

reply
amkharg26
11 minutes ago
[-]
The comparison to railroad bubble economics is apt. OpenAI's infrastructure costs are astronomical - training runs, inference compute, and scaling to meet demand all burn through capital at an incredible rate.

What's interesting is the strategic positioning. They need to maintain leadership while somehow finding a sustainable business model. The API pricing already feels like it's in a race to the bottom as competition intensifies.

For startups building on top of LLM APIs, this should be a wake-up call about vendor lock-in risks. If OpenAI has to dramatically change their pricing or pivot their business model to survive, a lot of downstream products could be impacted. Diversifying across multiple model providers isn't just good engineering - it's business risk management.

reply
password54321
5 hours ago
[-]
Not sure why they put so much investment into videoSlop and imageSlop. Anthropic seems to be more focused at least.
reply
Ginden
3 hours ago
[-]
Because almost everyone involved in the AI race grew up in "winner takes all" environments, typical for software, and they try really hard to make that a reality. This means your model should do everything, to take 90% of the market, or at least 90% of a specific niche.

The problem is, they can't find the moat despite searching very hard: whatever you bake into your AI, your competitors will be able to replicate in a few months. This is why OpenAI is striking a deal with Disney; copyright provides such a moat.

reply
worldsayshi
3 hours ago
[-]
> your competitors will be able to replicate in few months.

Will they really be able to replicate the quality while spending significantly less on compute? If not, then the moat is still how much capital you can acquire to burn on training.

reply
AlotOfReading
3 hours ago
[-]
Is that not what distillation is?
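For readers unfamiliar with the term: distillation trains a cheaper "student" model to match a "teacher" model's softened output distribution, which costs far less than the teacher's original training run. A minimal sketch of the objective, with toy logits; everything here is illustrative:

```python
import math

# Knowledge distillation in miniature: the student is trained to minimize the
# KL divergence between its output distribution and the teacher's, both
# softened by a temperature T. Logits below are toy values.

def softmax(logits, temperature=1.0):
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    """KL(p || q): how far the student distribution q is from the teacher p."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher_logits = [4.0, 1.0, 0.5]   # confident teacher
student_logits = [3.0, 1.5, 0.5]   # partially trained student

T = 2.0  # higher temperature softens both distributions
teacher_probs = softmax(teacher_logits, T)
student_probs = softmax(student_logits, T)

# Training adjusts the student's weights to drive this loss toward zero.
loss = kl_divergence(teacher_probs, student_probs)
print(f"distillation loss (KL at T={T}): {loss:.4f}")
```

Which is why a frontier model's outputs, once public, are themselves training data for a much cheaper imitation, undercutting the moat of having trained the teacher.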
reply
the_gipsy
3 hours ago
[-]
What does moat even mean anymore
reply
thisgetsit
3 hours ago
[-]
> copyright provides such a moat.

Been saying this since the 2014 Alice case. Apple jumped into content production in 2017. They saw the long-term value of copyright interests.

https://arstechnica.com/information-technology/2017/08/apple...

Alice changed things such that code monkeys' algorithms were not patentable (except in some narrow cases where true runtime novelty can be established). Since the transformers paper, the potential of self-authoring content was obvious to those who can afford to think about things rather than hustle all day.

Apple wants to sell AI in an aluminum box while VCs need to prop up data center agrarianism; they need people to believe their server farms are essential.

Not an Apple fanboy but in this case, am rooting for their "your hardware, your model" aspirations.

Altman, Thiel, the VC model of making the serfs tend their server fields, their control of foundation models: it's a gross feeling. It comes with the most religious-like sense of fealty to political hierarchy and social structure, one that only exists as a hallucination in the dying generations. The 50+ year-old crowd cannot generationally churn fast enough.

reply
CodingJeebus
2 hours ago
[-]
Totally agree. People love to talk about how hopelessly behind Apple is in terms of AI progress, when they're in a better position than anyone else to compete directly against Nvidia on hardware.
reply
madeofpalk
1 hour ago
[-]
Apple's always had great potential. They've struggled to execute on it.

But really, so has everyone else. There are two "races" in AI: creating models, and finding a consumer use case for them. Apple just isn't competing in creating models the way OpenAI or Google are. They also haven't done much to deliver 'revolutionary' general-purpose user-facing features with LLMs, but neither has anyone else, beyond chatbots.

I'm not convinced ChatGPT as a consumer product can sustain current valuations, and everyone is still clamouring to find another way to present this tech to consumers.

reply
odo1242
1 hour ago
[-]
It's also why they bought 40% of the world's RAM supply.
reply
sod22
3 hours ago
[-]
Striking deals without a proper vision is a waste of resources. And that’s the path OAI is on.
reply
dktp
2 hours ago
[-]
OpenAI is (was?) extremely good at making things that go viral. The successful ones surely boost subscriber count meaningfully.

Studio Ghibli, the Sora app: go viral, juice the numbers, then turn the knobs down on copyrighted material. Atlas, I believe, was less successful than they'd hoped.

And because of too-frequent version bumps that are sometimes released as an answer to Google's launches rather than as a meaningful improvement, I believe they're also having a harder time going viral that way.

Overall, OpenAI throws stuff at the wall and sees what sticks. Most of it doesn't and gets (semi-)abandoned. But some of it does, and it makes for a better consumer product than Gemini.

It seems to have worked well so far, though I'm sceptical it will be enough for long

reply
johnnyanmac
1 hour ago
[-]
Going viral is great when you're a small team or even a million dollar company. That can make or break your business.

Going viral as a billion-dollar company spending upwards of $1T is still not sustainable. You can't pay off a trillion dollars with "engagement". The entire advertising industry is "only" worth $1T as it is: https://www.investors.com/news/advertising-industry-to-hit-1...

reply
raw_anon_1111
2 hours ago
[-]
Selling a bunch of $20 a month subscriptions isn’t going to make a dent in OpenAI losses. Going viral for a day or two doesn’t help.

Normal people are already getting tired of AI Slop

reply
piskov
4 hours ago
[-]
Because, as with the internet, 99% of the usage won't be for education, work, personal development, what have you. It will be for effing kitten videos and memes.
reply
only-one1701
3 hours ago
[-]
That’s an unusual way of saying uh…adult entertainment
reply
nine_k
4 hours ago
[-]
Are the posters of effing kitten videos a customer base with a significant LTV?

(The obvious well-paying market would be erotic / furry / porn, but it's too toxic to publicly touch, at least in the US.)

reply
piskov
4 hours ago
[-]
OpenRouter's stats already show that 52% of usage is roleplay.

As for photo/video, a very large number of people use it for friends and family (turning a photo into a creative/funny video, editing photos, etc.).

Also, I would think Photoshop-like features are coming more and more to ChatGPT and the like. For example, "take my poorly lit photo and make it look professional and suitable for a LinkedIn profile."

reply
candiddevmike
4 hours ago
[-]
If only 99% of the Internet was kitten videos and memes
reply
piskov
4 hours ago
[-]
Well, it sure as hell isn't all 3blue1brown, crr0ww, Feynman, and the like
reply
Alconicon
4 hours ago
[-]
Because OpenAI stands for AI leader.

If Gemini can create or edit an image, ChatGPT needs to be able to do this too. Who wants to copy&paste prompts between ai agents?

Also if you want to have more semantics, you add image, video and audio to your model. It gets smarter because of it.

OpenAI is also relevantly bigger than Anthropic and is known as a generic 'helper'. Anthropic probably saw the benefit of being more focused on developers, which lets it stay in the game longer on the amount of money it has.

reply
JumpCrisscross
3 hours ago
[-]
> Who wants to copy&paste prompts between ai agents?

An AI!

The specialist vs generalist debate is still open. And for complex problems, sure, having a model that runs on a small galaxy may be worth it. But for most tasks, a fleet of tailor-made smaller models being called on by an agent seems like a solidly-precedented (albeit not singularity-triggering) bet.
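The "fleet of specialists" idea above can be sketched as a toy dispatcher. Everything here is invented for illustration — the model names, the routing table, and the keyword classifier (a real agent would use a model, not keywords, to classify tasks):

```python
# Hypothetical sketch of an agent dispatching tasks to small specialist models.
# All names are made up; this only illustrates the routing pattern.

SPECIALISTS = {
    "code": "small-code-model",
    "image": "small-vision-model",
    "chat": "small-chat-model",
}

def classify(task: str) -> str:
    """Toy classifier: route on keywords. A real agent would call a model here."""
    if "image" in task or "photo" in task:
        return "image"
    if "function" in task or "bug" in task:
        return "code"
    return "chat"

def dispatch(task: str) -> str:
    """Return which specialist model would handle this task."""
    return SPECIALISTS[classify(task)]

print(dispatch("fix this bug in my function"))  # small-code-model
print(dispatch("describe this photo"))          # small-vision-model
```

The point is only that one expensive generalist call gets replaced by a cheap classification step plus a tailor-made small model per task.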

reply
andrekandre
2 hours ago
[-]

  > But for most tasks, a fleet of tailor-made smaller models being called on by an agent seems like a solidly-precedented (albeit not singularity-triggering) bet.
not an expert by any means, but wouldn't smaller but highly refined models also output more reproducible results?

intuitively it sounds akin to the unix model...

reply
nubg
41 minutes ago
[-]
But then again, the main selling point of using LLMs as part of code that solves a business need is that you don't have to fine-tune a use-case-specific model (as in the mid-2010s); you just prompt-engineer a bit and it often magically works.
reply
password54321
4 hours ago
[-]
>Also if you want to have more semantics, you add image, video and audio to your model. It gets smarter because of it.

I think you are confusing generation with analysis. As far as I am aware, your model does not need to be good at generating images to be able to decode an image.

reply
adastra22
4 hours ago
[-]
It is, to first approximation, the same thing. The generative part of genAI is just running the analysis model in reverse.

Now there are all sorts of tricks to get the output of this to be good, and maybe they shouldn't be spending time and resources on this. But the core capability is shared.

reply
kaoD
4 hours ago
[-]
> The generative part of genAI is just running the analysis model in reverse.

I think that hasn't been the case since DeepDream?

reply
mbreese
4 hours ago
[-]
I think you're partially right, but I don't think being an AI leader is the main motivation -- that's a side effect.

I think it's important to OpenAI to support as many use-cases as possible. Right now, the experience that most people have with ChatGPT is through small revenue individual accounts. Individual subscriptions with individual needs, but modest budgets.

The bigger money is in enterprise and corporate accounts. To land these accounts, OpenAI will need to provide coverage across as many use-cases as they can so that they can operate as a one-stop AI provider. If a company needs to use OpenAI for chat, Anthropic for coding, and Google for video, what's the point? If Google's chat and coding is "good enough" and you need to have video generation, then that company is going to go with Google for everything. For the end-game I think OpenAI is playing for, they will need to be competitive in all modalities of AI.

reply
nutjob2
2 hours ago
[-]
> Because OpenAI stands for AI leader.

It'll just end up spreading itself too thin and be second or third best at everything.

The 500lb gorilla in the room is Google. They have endless money and maybe even more importantly they have endless hardware. OpenAI are going to have an increasingly hard time competing with them.

That Gemini 3 is crushing it right now isn't the problem. It's Gemini 4 or 5 that will likely leave them in the dust for the general use case, while specialist models eat what remains of their lunch.

reply
Cyclone_
3 hours ago
[-]
The fact that they do this isn't very bullish for them achieving whatever they define as AGI.
reply
fastball
3 hours ago
[-]
You don't expect AGI to be multi-modal?
reply
madeofpalk
1 hour ago
[-]
What is AGI?
reply
fittingopposite
9 minutes ago
[-]
Artificial general intelligence
reply
jdminhbg
4 hours ago
[-]
Because for all the incessant whining about "slop," multimodal AI i/o is incredibly useful. Being able to take a photo of a home repair issue, have it diagnosed, and return a diagram showing you what to do with it is great, and it's the same algos that power the slop. "Sorry, you'll have to go to Gemini for that use case, people got mad about memes on the internet" is not really a good way for them to be a mass consumer company.
reply
tayo42
3 hours ago
[-]
Can Claude not do that? I've sent it pictures for simpler things and got answers, usually IDs of bugs and plants.
reply
esafak
3 hours ago
[-]
Yes, Claude is multi-modal.
reply
dyauspitr
3 hours ago
[-]
Because those and world models are the endgame, way way more than text is.
reply
johnnyanmac
2 hours ago
[-]
Because these are mostly the same players as in the 2010s. So when they can't get more investor money and the hard problems remain uncracked, the easiest fallback is the same social-media slop they used to become successful 10-15 years earlier. Falling back on old ways to maximize engagement and grind out (eventually) ad revenue.
reply
johnnyfived
4 hours ago
[-]
But how much more profitable are they? We see revenue but not profits / spending. Anthropic seems to be growing faster than OpenAI did but that could be the benefit of post-GPT hype.
reply
SAI_Peregrinus
5 hours ago
[-]
Because their main use is for advertising/propaganda, which is largely videoSlop & imageSlop even without AI.
reply
password54321
4 hours ago
[-]
Outside of this: https://openai.com/index/disney-sora-agreement/ I don't think there has been much of a win for them even in advertising for image/video slop.
reply
anomaly_
3 hours ago
[-]
It's like half the posters on here live in some parallel universe. I am making real money using generated image/video advertising content for both B2C and B2B goods. I am using Whisper and LLMs to review customer service call logs at scale and identify development opportunities for staff. I am using GPT/Gemini to help write SQL queries and little Python scripts to do data analysis on my customer base. My business's productivity is way up since GenAI became accessible.
reply
bdangubic
3 hours ago
[-]
that (very vocal) half tried it once and it didn’t work :)
reply
zackify
10 minutes ago
[-]
Surprised they burn cash advertising on Reddit to “make a mini me” version of yourself where you hold your body in your hand. What a waste of AI lol
reply
adriand
5 hours ago
[-]
This article doesn’t add anything to what we know already. It’s still an open question what happens with the labs this coming year, but I personally think Anthropic’s focus on coding represents the clearest path to subscriber-based success (typical SaaS) whereas OpenAI has a clear opportunity with advertising. Both of these paths could be very lucrative. Meanwhile I expect Google will continue to struggle with making products that people actually want to use, irrespective of the quality of its models.
reply
doctaj
5 hours ago
[-]
What Google AI products do people not want to use? Gemini is catching up to ChatGPT from a MAU perspective, AI overviews in search are super popular and staggeringly more used than any other AI-based product out there, Google's AI Mode has decent usage, and Google Lens has surprisingly high usage. These products together dwarf everyone else out there by like 10x.
reply
famouswaffles
2 hours ago
[-]
>Gemini is catching up to ChatGPT from a MAU perspective

It is far behind, and GPT hasn't exactly stopped growing either. By weekly active users or monthly visits, Gemini is nowhere near. They're comfortably second, but second is still well below first.

>ai overviews in search are super popular and staggeringly more used than any other ai-based product out there

Is it? How would you even know? It's a forced feature you cannot opt out of or choose not to use. I ignore AI overviews, but I would still count as a 'user' to you.

reply
websiteapi
50 minutes ago
[-]
we're curious what your source is
reply
famouswaffles
18 minutes ago
[-]
reply
madeofpalk
1 hour ago
[-]
Is Gemini, as a chatbot, a product that sustains current valuations and investment?
reply
ajross
5 hours ago
[-]
> ai overviews in search are super popular and staggeringly more used than any other ai-based product out there

This really is the critical bit. A year ago, the spin was "ChatGPT AI results are better than search, why would you use Google?", now it's "Search result AI is just as good as ChatGPT, why bother?".

When they were disruptive, it was enough to be different to believe that they'd win. Now they need to actually be better. And... they kinda aren't, really? I mean, lots of people like them! But for Regular Janes at the keyboard, who cares? Just type your search and see what it says.

reply
esel2k
5 hours ago
[-]
Where does Google struggle to make products people want to use? Is it a personal opinion?
reply
_ache_
5 hours ago
[-]
Bard was a flop. Google search is losing market share to other LLM providers. Gemini adoption is low; people around me prefer OpenAI because it is good enough and well known.

But on the contrary, Nano Banana is very good, so I don't know. And in the end, I'm pretty confident Google will be the AI race winner, because they've got the engineers, the tech background and the money. Unless Google AdSense dies, they can continue the race forever.

reply
bee_rider
3 hours ago
[-]
If Google is producing very good models and they aren’t gaining much traction, that seems like a pretty bad sign for them, right? If they were failing with bad models, the solution would be easy: math and engineer harder, make better models (I mean, this is obviously very hard but it is a clear path). Failing with good models is… confusing, it indicates there’s some unknown problem.
reply
sod22
2 hours ago
[-]
It’s irrelevant, Google needs to focus on performance enhancements that the enterprise market segment demands - who only operate in the air of objectivity.

If they can achieve that, they will cut off a key source of blood supply to MSFT+OAI. There is not much subscriber money in the consumer market segment, and entering the ad business is going to be a lot tougher than people think.

reply
bloppe
2 hours ago
[-]
> Gemini adoption is low, people around me prefer OpenAI because it is good enough and known.

Gemini is built into Android and Google search. People may not be going to gemini.google.com, but that does not mean adoption is low.

reply
raw_anon_1111
2 hours ago
[-]
I mean, it's a simple Google search to show that Google isn't losing much market share to ChatGPT and that most ChatGPT users still use Google.

https://searchengineland.com/nearly-all-chatgpt-users-visit-...

But even more importantly, it obviously isn’t losing money from advertisers to ChatGPT. You can look at their quarterly results.

reply
agentifysh
4 hours ago
[-]
What are you talking about? Gemini adoption has tripled in a few months alone, it has around 18% market share, and it's accelerating.
reply
robkop
3 hours ago
[-]
I've heard too many rumors that much of that adoption comes from copying MS, i.e. bundling Gemini into their office suite.
reply
woooooo
2 hours ago
[-]
Gemini adoption via search is legit, though. I had a question, I got an answer; it's not forced, fake adoption in that case.
reply
andrekandre
2 hours ago
[-]
gemini is in basically everything from google now, from google docs to firebase to android studio so i wouldn't be surprised...
reply
wg0
4 hours ago
[-]
Antigravity is a flop. I mean, it uses Gemini under the hood.

But you cannot use it with an API key.

If you're on a workspace account, you can't have normal individual plan.

You have to have the team plan with $100/month or nothing.

Google's product management tier is beyond me.

reply
glial
4 hours ago
[-]
OK, but Gmail, Google Maps, Google Docs, Google Search, etc. are ubiquitous. 'Google' has even become a verb. Google might take a shotgun approach, but it certainly does create widely used products.
reply
nateb2022
3 hours ago
[-]
I will add that there's also Gemini in Chrome. With Chrome being the largest browser by market share, that's a powerful de facto default.
reply
andrekandre
2 hours ago
[-]

  > With Chrome being the largest browser by market share, that's a powerful de facto default.
where art thou anti-trust enforcement...
reply
raw_anon_1111
2 hours ago
[-]
Every personal computer user except Chromebook users went out of their way to download Chrome. What exactly do you want “anti trust” to do?
reply
andrekandre
1 hour ago
[-]
maybe not allow google to bundle gemini with chrome?
reply
raw_anon_1111
1 hour ago
[-]
So should we also not allow OpenAI to bundle the OpenAI model with the ChatGPT app

Absolutely no one besides ChromeOS users are forced to use Chrome.

reply
wg0
3 hours ago
[-]
That doesn't negate my original point.
reply
Davidzheng
49 minutes ago
[-]
There are other avenues of income. You can invade industries that are slow on AI uptake and build a from-the-ground-up AI competitor with large advantages over peers. There are hints of this (not from-the-ground-up AI, but with more AI) in DeepMind's drug-research labs. This could be a huge source of income: you can kill entire industries that inevitably cannot incorporate AI as fast as AI companies can internally.
reply
johnnyanmac
1 hour ago
[-]
What "we" know already is hard to add to, as a forum that has a dozen AI articles a day on every little morsel of news.

>whereas OpenAI has a clear opportunity with advertising.

Personally, having "a clear opportunity with advertising" feels like a last ditch effort for a company that promised the moon in solving all the hard problems in the world.

reply
IshKebab
5 hours ago
[-]
I don't. Google has at least a few advantages:

1. Google books, which they legally scanned. No dubious training sets for them. They also regularly scrape the entire internet. And they have YouTube. Easy access to the best training data, all legally.

2. Direct access to the biggest search index. When you ask ChatGPT to search for something it is basically just doing what we do but a bit faster. Google can be much smarter, and because it has direct access it's also faster. Search is a huge use case of these services.

3. They have existing services like Android, Gmail, Google Maps, Photos, Assistant/Home etc. that they can integrate into their AI.

The difference in model capability seems to be marginal at best, or even in Google's favour.

OpenAI has "it's not Google" going for it, and also AI brand recognition (everyone knows what ChatGPT is). Tbh I doubt that will be enough.

reply
Al-Khwarizmi
4 hours ago
[-]
And they have hardware as well, and their own cloud platform.

In my view Google is uniquely well positioned because, contrary to the others, it controls most of the raw materials for AI.

reply
ok123456
4 hours ago
[-]
Google's most significant advantage in this space is its organizational experience in providing services at this scale, as well as its mature infrastructure to support them. When the bubble pops, it's not lights-out or permanently degraded performance.
reply
what
1 hour ago
[-]
Probably more people use Google's AI than anything else. Every search result has an LLM-generated summary at the top.
reply
Abh1Works
35 minutes ago
[-]
I think a super important aspect that people are overlooking is that every VC wants to invest in the next "big" AI company, and the odds favor funding only AI companies, because any one of them could be the next big thing. I think, with a downturn in VC investment, we will see more investment in companies that aren't AI-native but use AI as a tool in the toolbox to deliver insights.
reply
siliconc0w
2 hours ago
[-]
The best case I can see is they integrate shopping and steal the best high-intent cash cow commercial queries from G. It's not really about AI, it's about who gets to be the next toll road.
reply
therobots927
36 minutes ago
[-]
Google already puts AI summaries at the top of search. It would be trivial for them to incorporate shopping. And they have infinitely more traffic than OpenAI does. I just don’t see how OpenAI could possibly compete with that. What are you seeing that I’m not?
reply
gip
5 hours ago
[-]
There is no doubt that OpenAI is taking a lot of risks by betting that AI adoption will translate into revenues in the very short term. And that could really happen imo (with a low probability sure, but worth the risk for VCs? Probably).
reply
agentifysh
4 hours ago
[-]
It's mathematically impossible what OpenAI is promising. They know it. The goal is to be too big to fail and get bailed out by US taxpayers who have been groomed into viewing AI as a cold war style arms race that America cannot lose.
reply
Aurornis
2 hours ago
[-]
> The goal is to be too big to fail and get bailed out by US taxpayers

I know this is the latest catastrophizing meme for AI companies, but what is it even supposed to mean? OpenAI failing wouldn't mean AI disappears and all of their customers go bankrupt, too. It's not like a bank. If OpenAI became insolvent or declared bankruptcy, its intellectual property wouldn't disappear or become useless. Someone would purchase it and run it again under a new company. We also have multiple AI companies, and switching costs are not that high for customers, although some adjustment is necessary when changing models.

I don’t even know what people think this is supposed to mean. The US government gives them money for something to prevent them from filing for bankruptcy? The analogy to bank bailouts doesn’t hold.

reply
johnnyanmac
1 hour ago
[-]
>I know this is the latest catastrophizing meme for AI companies, but what is it even supposed to mean?

Someone else put it succinctly.

"When A million dollar company fails, it's their problem. When a billion dollar company fails, it's our problem"

In essence, there's so much investment in AI that it's a significant part of US GDP. If AI falters, the entire stock market will feel it, and by extension so will all Americans, no matter how detached from tech they are. In other words, the potential for another great depression.

In that regard, the government wants to avoid that. So they will at least give a small bailout to lessen the crash. But more likely (as seen with the Great Financial Crisis), they will likely supply billions upon billions to prop up companies that by all business logic deserved to fail. Because the alternative would be too politically damaging to tolerate.

----

That's the theory. These all aren't certain and there are arguments to suggest that a crash in AI wouldn't be as bad as any of the aforementioned crashes. But that's what people mean by "become too big to fail and get bailed out".

reply
zozbot234
1 hour ago
[-]
The closest analogy is the dot-com crash and there really wasn't any bailout for that, despite the short term GDP impact. And billion-dollar companies were involved back in the day too, like Apple, Microsoft, Amazon, Ebay etc. etc.
reply
OGEnthusiast
1 hour ago
[-]
OpenAI isn't a publicly traded company, though; how will it going to zero affect the stock market?
reply
johnnyanmac
47 minutes ago
[-]
OpenAI collapses and MSFT tanks. Microsoft shareholders aren't quite that dumb.

And that's ignoring the domino effect of investors pulling out of other AI firms because OpenAI faltered.

reply
OGEnthusiast
32 minutes ago
[-]
> Microsoft shareholders aren't quite that dumb.

If they aren't dumb, why are they investing in MSFT now then if it's a bubble that's doomed to fail? And even in the worst case scenario, a 10-15% decline in the S&P 500 won't trigger the next Great Depression. (Keep in mind that we already had a ~20% drawdown in public equities during the interest rate hikes of 2022/2023 and the economy remained pretty robust throughout.)

reply
johnnyanmac
23 minutes ago
[-]
Like I said, they aren't "that" dumb. They are playing a risky game, but when they see the number go down rapidly they will pull out, which will make the line go down even faster.

>And even in the worst case scenario, a 10-15% decline in the S&P 500 won't trigger the next Great Depression

Only if you believe the 10% decline won't domino and that the S&P500 is secluded from the rest of the global economy. I wish I shared your optimism.

> and the economy remained pretty robust throughout.

Yeah and we voted the person who orchestrated that out. We don't have the money to pump trillions back in a 2nd time in such a short time. Something's gonna give, and soon.

reply
OGEnthusiast
16 minutes ago
[-]
> Only if you believe the 10% decline won't domino and that the S&P500 is secluded from the rest of the global economy. I wish I shared your optimism.

So your hypothesis is that a 10% decline in the S&P 500 will trigger the next Great Depression, i.e. years of negative GDP growth and unemployment? I agree that it could cause a slight economic slowdown, but I don't think AI and tech stocks are a large enough part of the economy to cause a Great Depression-style catastrophe.

reply
liamconnell
1 hour ago
[-]
> Someone would purchase it and run it again under a new company.

That happened a long time ago! Microsoft already owns the model weights!

reply
senshan
3 hours ago
[-]
> It's mathematically impossible what OpenAI is promising

Citation is needed

reply
testing22321
3 hours ago
[-]
Don't they have to make more money in the next 10 years than any company ever has… and that is just to break even?

It’s going to crash, guaranteed

reply
senshan
3 hours ago
[-]
It is the term "mathematically impossible" that caught my attention. Since it is about the future promise of OpenAI, one could debate the likelihood or "statistically improbable", but "mathematically impossible" implies some calculation, proof and certainty. Hence my curiosity.
reply
CharlieDigital
2 hours ago
[-]
I've seen a calculation, I think from an HSBC analyst, that it would take a $200/mo subscription from some large portion of the US population for some insane number of years to break even.
reply
Aurornis
2 hours ago
[-]
> from some large portion of the US population

What a silly calculation.

OpenAI’s customer base is global. Using US population as the customer base is deliberately missing the big picture. The world population is more than 20X larger than the US population.

It’s also obvious that they’re selling heavily to businesses, not consumers. It’s not reasonable to expect consumers to drive demand for these services.

reply
johnnyanmac
1 hour ago
[-]
>OpenAI’s customer base is global.

I'd be willing to bet that, like many US websites, OpenAI's users are at least 60% American. Just because there's 20x more people out there doesn't mean they have the same exposure to American products.

For instance, China is an obvious one. So that's 35%+ of the population already mostly out of consideration.

>It’s also obvious that they’re selling heavily to businesses, not consumers.

I don't think a few thousand companies can outspend 200m users paying $200 a month. I won't call it a "mathematical impossibility", but the math also isn't math-ing here.
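As a back-of-envelope sanity check on the numbers in this subthread (using the thread's hypothetical figures — 200m users at $200/mo — not real OpenAI data):

```python
# Back-of-envelope check of the thread's hypothetical subscription revenue.
# These are the figures quoted in the discussion, not actual OpenAI numbers.

subscribers = 200_000_000   # "200m users"
price_per_month = 200       # "$200 a month"

annual_revenue = subscribers * price_per_month * 12
print(f"${annual_revenue / 1e9:.0f}B per year")  # $480B per year
```

So on those assumptions, a couple of years of such subscriptions would indeed approach a trillion dollars — which is exactly why the debate hinges on whether any customer base that large at that price is plausible.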

reply
bloppe
2 hours ago
[-]
Bailing out OAI would be entirely unnecessary (crowded field) and political suicide (how many hundreds of billions that could have gone to health care instead?)

If it happens in the next 3 years, tho, and Altman promises enough pork to the man, it could happen.

reply
jjulius
1 hour ago
[-]
>Bailing out OAI would be ... political suicide (how many hundreds of billions that could have gone to health care instead?)

Not that I have an opinion one way or another regarding whether or not they'd be bailed out, but this particular argument doesn't really seem to fit the current political landscape.

reply
johnnyanmac
1 hour ago
[-]
This administration has "committed political suicide" dozens of times this year. What's one more to add to the pile?
reply
doctorpangloss
3 hours ago
[-]
on the one hand, i understand you are making a stylized comment, on the other hand, as soon as i started writing something reasonable, i realized this is an "upvote lame catastrophizing takes about" (checking my notes) "some company" thread, which means reasonable stuff will get downvoted... for example, where is there actual scarcity in their product inputs? for example, will they really be paying retail prices to infrastructure providers forever? is that a valid forecast? many reasonable ways to look at this. even if i take your cynical stuff at 100% face value, the thing about bailouts is that they're more complicated than what you are saying, but your instinct is to say they're not complicated, "grooming" this and "cold war" that, because your goal is to concern troll, not advance this site's goal of curiosity...
reply
krupan
2 hours ago
[-]
They've already spent so much money that even if they get any new hardware at a deep discount they will have a very hard time breaking even
reply
senshan
3 hours ago
[-]
Correction: OpenAI investors do take that risk. Some of the investors (e.g. Microsoft, Nvidia) dampen that risk by making such investment conditioned on boosting the investor's own revenue, a stock buyback of sorts.
reply
Alconicon
4 hours ago
[-]
Apparently we all have enough money to put into OpenAI.

Some players have to play, like Google; some players want to play, like the USA vs. China.

Besides that, chatting with an LLM is very, very convincing. Normal non-technical people can see what 'this thing' can already do, and as long as progress continues as fast as it currently is, it's still a very easy future to sell.

reply
bigyabai
4 hours ago
[-]
> Some players have to play, like google

I don't think you have the faintest clue of what you're talking about right now. Google authored the transformer architecture, the basis of every GPT model OpenAI has shipped. They aren't obligated to play any more than OpenAI is, they do it because they get results. The same cannot be said of OpenAI.

reply
mvkel
5 hours ago
[-]
The fact is nobody has any idea what OpenAI's cash burn is. Measuring how much they're raising is not an adequate proxy.

For all we know, they could be accumulating capital to weather an AI winter.

It's also worth noting that OpenAI has not trained a new model since gpt4o (all subsequent models are routing systems and prompt chains built on top of 4), so the idea of OpenAI being stuck in some kind of runaway training expense is not real.

reply
super256
4 hours ago
[-]
I think you are mixing things up here, and I think your comment is based on the article from SemiAnalysis. [1]

It said: OpenAI’s leading researchers have not completed a successful full-scale pre-training run that was broadly deployed for a new frontier model since GPT-4o in May 2024, highlighting the significant technical hurdle that Google’s TPU fleet has managed to overcome.

However, pre-training run is the initial, from-scratch training of the base model. You say they only added routing and prompts, but that's not what the original article says. They most likely still have done a lot of fine tuning, RLHF, alignment and tool calling improvements. All that stuff is training too. And it is totally fine, just look at the great results they got with Codex-high.

If you actually got what you said from a different source, please link it. I would like to read it. If you just mixed things up, that's fine too.

[1] https://newsletter.semianalysis.com/p/tpuv7-google-takes-a-s...

reply
reissbaker
4 hours ago
[-]
The GPT-5 series is a new model, based on the o1/o3 series. It's very much inaccurate to say that it's a routing system and prompt chain built on top of 4o. 4o was not a reasoning model and reasoning prompts are very weak compared to actual RLVR training.

No one knows whether the base model has changed, but 4o was not a base model, and neither is 5.x. Although I would be kind of surprised if the base model hadn't also changed, FWIW: they've significantly advanced their synthetic data generation pipeline (as made obvious via their gpt-oss-120b release, which allegedly was entirely generated from their synthetic data pipelines), which is a little silly if they're not using it to augment pretraining/midtraining for the models they actually make money from. But either way, 5.x isn't just a prompt chain and routing on top of 4o.

reply
doug_durham
4 hours ago
[-]
Prior to 5.2 you couldn't expect to get good answers to questions about anything after March 2024. It was arguing with me that Bruno Mars did not have two hit songs in the last year. It's clear that in 2025 OpenAI used the old 4.0 base model and tried to supercharge it using RLVR. That had very mixed results.
reply
brokencode
2 hours ago
[-]
That just means their pretraining data set was older. You can train as many models as you want on the same data.

I’m sure all these AI labs have extensive data gathering, cleanup, and validation processes for new data they train the model on.

Or at least I hope they don’t just download the current state of the web on the day they need to start training the new model and cross their fingers.

reply
Imustaskforhelp
5 hours ago
[-]
Didn't they create Sora and other models, and literally burn so much money on their AI video app? They wanted to make it a social network, but what ended up happening was that they burned billions of dollars.
reply
rozap
1 hour ago
[-]
I wonder what happens to people who make these hilariously bad business decisions. Like the person at Twitter who decided to kill Vine. Do they spin it and get promoted? Something else?

I'd love a blog or coffee table book of "where are they now" for the director level folks who do dumb shit like this.

reply
nl
1 hour ago
[-]
> It's also worth noting that OpenAI has not trained a new model since gpt4o (all subsequent models are routing systems and prompt chains built on top of 4), so the idea of OpenAI being stuck in some kind of runaway training expense is not real.

This isn't really accurate.

Firstly, GPT4.5 was a new training run, and it is unclear how many other failed training runs they did.

Secondly "all subsequent models are routing systems and prompt chains built on top of 4" is completely wrong. The models after gpt4o were all post-trained differently using reinforcement learning. That is a substantial expense.

Finally, it seems like GPT5.2 is a new training run - or at least the training cut off date is different. Even if they didn't do a full run it must have been a very large run.

reply
computerphage
5 hours ago
[-]
Why do you think they have not trained a new model since 4o? You think the GPT-5 release is /just/ routing to differently sized 4o models?
reply
wahnfrieden
5 hours ago
[-]
they're incorrect about the routing statement but it is not a newly trained model
reply
orbital-decay
4 hours ago
[-]
>It's also worth noting that OpenAI has not trained a new model since gpt4o (all subsequent models are routing systems and prompt chains built on top of 4)

At the very least they made GPT-4.5, which was pretty clearly trained from scratch. It was possibly what they wanted GPT-5 to be, but they made a wrong scaling prediction: people simply weren't ready to pay that much money.

reply
brokencode
2 hours ago
[-]
People would have paid for it if it was actually significantly better. It was a huge cost increase for a pretty minor performance increase.
reply
ajross
4 hours ago
[-]
> The fact is nobody has any idea what OpenAI's cash burn is.

Their investors surely do (absent outrageous fraud).

> For all we know, they could be accumulating capital to weather an AI winter.

If they were, their investors would be freaking out (or complicit in the resulting fraud). This seems unlikely. In point of fact it seems like they're playing commodities market-cornering games[1] with their excess cash, which implies strongly that they know how to spend it even if they don't have anything useful to spend it on.

[1] Again c.f. fraud

reply
andrewflnr
3 minutes ago
[-]
> For all we know, they could be accumulating capital to weather an AI winter.

Right, this is nonsense. Even if investors wanted to be complicit in fraud, it's an insane investment. "Give us money so we can survive the AI winter" is a pitch you might try with the government, but a profit-motivated investor will... probably not actually laugh in your face, but tell you they'll call you and laugh about you later.

reply
yojo
5 hours ago
[-]
They have not successfully trained a new model since 4o. That doesn’t mean they haven’t burned a pile of cash trying.

I know sama says they aren’t trying to train new models, but he’s also a known liar and would definitely try to spin systemic failure.

reply
osiris970
2 hours ago
[-]
GPT-5.2 was a new pretraining run, I believe.
reply
sod22
4 hours ago
[-]
lol, the typical AI boosters are downvoting you.
reply
ta9000
5 hours ago
[-]
How are they updating the data then? Wouldn’t the cutoff date be getting further away from today?
reply
3eb7988a1663
4 hours ago
[-]
RAG? Even for a "fresh" model, there is no way to keep it up to date, so there has to be a mechanism by which to reference, e.g., last night's football game.
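A toy sketch of the retrieval-augmented pattern being described: fetch fresh documents at query time and stuff them into the prompt, so a model with an old training cutoff can still answer about recent events. Everything here (the retrieval scoring, the documents, the prompt template) is illustrative, not any lab's actual pipeline.

```python
def retrieve(query, documents):
    """Naive keyword retrieval: rank documents by word overlap with the query."""
    q_words = set(query.lower().split())
    scored = [(len(q_words & set(d.lower().split())), d) for d in documents]
    scored.sort(reverse=True)
    return [d for score, d in scored if score > 0]

def build_prompt(query, documents, top_k=2):
    """Prepend the top-k retrieved documents as context for the model."""
    context = "\n".join(retrieve(query, documents)[:top_k])
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using the context."

docs = [
    "The home team won last night's football game 24-17.",
    "A recipe for sourdough bread.",
]
print(build_prompt("who won the football game last night", docs))
```

Real systems swap the keyword overlap for embedding search, but the shape is the same: the freshness lives in the retrieved context, not in the model weights.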
reply
MagicMoonlight
5 hours ago
[-]
They’re just feeding a little bit of slop in every so often. Fine tuning rather than training a new one.
reply
micromacrofoot
5 hours ago
[-]
they're paying million dollar salaries to engineers and building data centers, it's not a huge mystery where their expenses are
reply
slashdave
4 hours ago
[-]
> they could be accumulating capital to weather an AI winter

Doubtful. This would be the very antithesis of the Silicon Valley way.

reply
wahnfrieden
5 hours ago
[-]
wasn't 4.5 new
reply
FergusArgyll
5 hours ago
[-]
Yes it was, op didn't read the reporting closely enough. It said something to the effect of "Didn't pretrain a new broadly released, generally available model"
reply
mvkel
1 hour ago
[-]
That's what I meant, though. That's the expensive part.
reply
Taek
5 hours ago
[-]
Wasn't 4.5 before 4o?
reply
dredmorbius
6 hours ago
[-]
Archive/Paywall: <https://archive.is/rHPk3>
reply
bariswheel
5 hours ago
[-]
thank you!
reply
HardCodedBias
4 hours ago
[-]
OpenAI has #5 traffic levels globally. Their product-market fit is undeniable. The question is monetization.

Their cost to serve each request is roughly 3 orders of magnitude higher than conventional web sites.

While it is clear people see value in the product, we only know they see value at today’s subsidized prices. It is possible that inference prices will continue their rapid decline. Or it is possible that OAI will need to raise prices and consumers will be willing to pay more for the value.
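As a back-of-envelope illustration of what "3 orders of magnitude" means, with unit costs that are assumptions for the sake of arithmetic rather than anyone's reported figures:

```python
# Illustrative unit costs (assumptions, not reported numbers):
conventional_cost_per_request = 0.00001  # ~$0.00001: cached page / simple API hit
llm_cost_per_request = 0.01              # ~$0.01: GPU inference for a chat reply

ratio = llm_cost_per_request / conventional_cost_per_request
print(f"{ratio:.0f}x")  # -> 1000x, i.e. three orders of magnitude
```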

reply
pclmulqdq
4 hours ago
[-]
It's easy to get product-market fit when you give away dollars for the price of pennies.
reply
PeterHolzwarth
4 hours ago
[-]
Yes, but that is the standard methodology for startups in their boost phase. Burn vast piles of cash to acquire users, then find out at the end if a profitable business can be made of it.
reply
EA-3167
3 hours ago
[-]
It’s also the standard methodology for a number of scams.
reply
Analemma_
2 hours ago
[-]
Most startups have big upfront capital costs and big customer acquisition costs, but small or zero marginal costs and COGS, and eventually the capital costs can slow down. That's why spending big and burning money to get a big customer base is the standard startup methodology. But OpenAI doesn't have tiny COGS: inference is expensive as fuck. And they can't stop capex spending on training because they'll be immediately lapped by the other frontier labs.

The reason people are so skeptical is that OpenAI is applying the standard startup justification for big spending to a business model where it doesn't seem to apply.

reply
famouswaffles
1 hour ago
[-]
>But OpenAI doesn't have tiny COGS: inference is expensive as fuck.

No, inference is really cheap today, and people saying otherwise simply have no idea what they are talking about. Inference is not expensive.

reply
robkop
3 hours ago
[-]
Does that cost-to-serve multiple stay the same when conventional sites are forced to shovel AI into each request? E.g. the new Google search.
reply
doctorpangloss
3 hours ago
[-]
it's a simple problem really. what is actually scarce?

a spot on the iOS home screen? yes.

infrastructure to serve LLM requests? no.

good LLM answers? no.

the economist can't tell the difference between scarcity and real scarcity.

it is extremely rare to buy a spot on the iOS home screen, and the price for that is only going up - think of the trend of values of tiktok, whatsapp and instagram. that's actually scarce.

that is what openai "owns." you're right, #5 app. you look at someone's home screen, and the things on it are owned by 8 companies, 7 of which are the 7 biggest public companies in the world, and the 8th is openai.

whereas infrastructure does in fact get cheaper. so does energy. they make numerous mistakes - you can't forecast retail prices Azure is "charging" openai for inference. but also, NVIDIA participates in a cartel. GPUs aren't actually scarce, you don't actually need the highest process nodes at TSMC, etc. etc. the law can break up cartels, and people can steal semiconductor process knowledge.

but nobody can just go and "create" more spots on the iOS home screen. do you see?

reply
somewhereoutth
2 hours ago
[-]
depends if they can monetize that spot. So either ads or subscription. It is as yet unclear whether ads/subscription can generate sufficient revenue to cover costs and return a profit. Perhaps 'enough ads' will be too much for users to bear, perhaps 'enough subscription' will be too much for users to afford.
reply
jeffbee
4 hours ago
[-]
For what I use them for, the LLM market has become a two player game, and the players are Anthropic and Google. So I find it quite interesting that OpenAI is still the default assumption of the leader.
reply
gizmodo59
2 hours ago
[-]
Only on HN and some Reddit subs do I even see the name Claude. In many countries, AI = ChatGPT.
reply
bjt
2 hours ago
[-]
And at one point in the 90s, Internet=Netscape Navigator.

I see Google doing to OpenAI today what Microsoft did to Netscape back then, using their dominant position across multiple channels (browser, search, Android) to leverage their way ahead of the first mover.

reply
jeffbee
1 hour ago
[-]
That's funny, the way I see it is Microsoft put tens of billions of dollars behind an effort to catch Google on the wrong foot, or at least make Google look bad, but they backed the wrong guy and it isn't quite going to make it to orbit.
reply
reilly3000
2 hours ago
[-]
codex cli with gpt-5.2-codex is so reliably good, it earns the default position in my book. I had cancelled my subscription in early 2024 but started back up recently and have been blown away at how terse, smart, and effective it is. Their CLI harness is top-notch and it manages to be extremely efficient with token usage, so the little plan can go for much of the day. I don’t miss Claude’s rambling or Gemini’s random refactorings.
reply
joegibbs
2 hours ago
[-]
Interestingly, Claude is so far down in traffic that it's below things like CharacterAI. It may be the best model, but usage is something like 70% ChatGPT, 10% Gemini, and only 1% or so Claude.
reply
acoustics
4 hours ago
[-]
ChatGPT dominates the consumer market (though Nano Banana is singlehandedly breathing some life into consumer Gemini).

A small anecdote: when ChatGPT went down a few months ago, a lot of young people (especially students) just waited for it to come back up. They didn't even think about using an alternative.

reply
dom96
4 hours ago
[-]
When ChatGPT starts injecting ads or forcing payment or doing anything else that annoys its userbase then the young people won't have a problem looking for alternatives

This "moat" that OpenAI has is really weak

reply
PeterHolzwarth
4 hours ago
[-]
They took early steps to do so (ads) just recently. User response was as you'd expect.
reply
jeffbee
4 hours ago
[-]
That's pretty nuts. With the models changing so much and so often, you have to switch it up sometimes just to see what the other company is offering.
reply
adastra22
4 hours ago
[-]
How often do you or people you know use a search engine other than google?
reply
jeffbee
3 hours ago
[-]
That is different because all of the players I mentioned have credible, near-leading products in the AI model market, whereas nobody other than Google has search results worth a damn. I wouldn't recommend anyone squander their time by checking Kagi or DDG or Bing more than once.
reply
adastra22
2 hours ago
[-]
I don't use google. Believe it or not, I get better results via Bing (usually via DDG, which is a frontend for Bing). But I asked the rhetorical question expecting the answer you gave. These people use ChatGPT only for the same reason you exclusively use Google.
reply
andsoitis
5 hours ago
[-]
In a parallel universe, governments invest in the compute/datacenters (read: infra), and let model makers compete on the same playing field.
reply
aucisson_masque
5 hours ago
[-]
I’d rather stay far away from this parallel universe.

Why would you want my money to be used to build datacenters that won't benefit me? I might use an LLM once a month; many people never use one.

Let the ones who use it pay for it.

reply
3eb7988a1663
5 hours ago
[-]
You are already paying for several national lab HPC centers. These are used for government/university research - no idea if commercial interests can rent time on them. The big ones are running weather, astronomy simulations, nuclear explosions, biological sequencing, and so on.
reply
serf
4 hours ago
[-]
>The big ones are running weather, astronomy simulations, nuclear explosions, biological sequencing, and so on.

these things constitute public goods that benefit the individual regardless of participation.

reply
fdr
4 hours ago
[-]
The biggest run classified nuclear stockpile loads, at least in the US. They cost about half a billion apiece. And are 30 (carefully cooled and cabled) megawatts. https://en.wikipedia.org/wiki/El_Capitan_(supercomputer)

No chance they're going to take risks to share that hardware with anyone given what it does.

The scaled down version of El Capitan is used for non-classified workloads, some of which are proprietary, like drug simulation. It is called Tuolumne. Not long ago, it was nevertheless still a top ten supercomputer.

Like OP, I also don't see why a government supercomputer does it better than hyperscalers, coreweave, neoclouds, et al, who have put in a ton of capital as even compared to government. For loads where institutional continuity is extremely important, like weather -- and maybe one day, a public LLM model or three -- maybe. But we're not there yet, and there's so much competition in LLM infrastructure that it's quite likely some of these entrants will be bag holders, not a world of juicy margins at all...rather, playing chicken with negative gross margins.

reply
nine_k
4 hours ago
[-]
Many more people materially benefit from e.g. good weather forecasts than from video slop generation.
reply
nialv7
5 hours ago
[-]
if datacenters are built by the government, then i think it's fair to assume there will be some level of democratic control of what those datacenters will be used for.
reply
quantified
4 hours ago
[-]
What's the democratic control of existing resources? I would make the opposite assumption, it would be captured by the wealthiest interests.
reply
shimman
4 hours ago
[-]
This is literally the current system... adding more democratic controls is a good thing. The alternative is that only the rich control these systems, and would you look at that, only the rich control these systems.

Uncanny really.

reply
nine_k
4 hours ago
[-]
Certainly! Your congressional representatives would be voting on how to allocate its computing power. (Do you remember who you voted for last time?)
reply
GaryBluto
3 hours ago
[-]
Sure. Same for healthcare and education right? If you don't have a child or need medical attention, why should you pay for them?
reply
inerte
5 hours ago
[-]
That's like every government initiative. Same as healthcare? School? I mean if you don't have children why do you pay taxes... and roads if you don't drive? I mean the examples are so many... why do you bring this argument that if it doesn't benefit you directly right now today, it shouldn't be done?
reply
zdragnar
5 hours ago
[-]
There are arguments aplenty that schooling and a minimum amount of healthcare are public goods, as are roads built on public land (the government owns most roads after all).

What is the justification for considering data centers capable of running LLMs to be a public good?

There are many counter examples of things many people use but are still private. Clothing stores, restaurants and grocery stores, farms, home appliance factories, cell phone factories, laundromats and more.

reply
reverserdev
4 hours ago
[-]
Libraries with books are likely considered public goods right?

Why not an LLM datacenter if it also offers information? You could say it's the public library of the future maybe.

reply
zdragnar
4 hours ago
[-]
Not all libraries are publicly owned or accessible. Most are run by local municipalities because they wouldn't exist otherwise.

Data centers clearly can exist without being owned by the public.

reply
bjt
3 hours ago
[-]
So can bookstores.
reply
wahnfrieden
5 hours ago
[-]
a distinction: the data centers have become the means of production, unlike clothing from a store
reply
zdragnar
4 hours ago
[-]
How is that distinct from any of my other examples which listed factories? Very few factories in the US are publicly owned; citing data centers as places of production merely furthers the argument that they should remain private.
reply
magpi3
5 hours ago
[-]
Healthcare, schools, roads, generative AI. One of these things is not like the others.
reply
inerte
5 hours ago
[-]
We gave incentives to broadband, why not generative AI?
reply
wat10000
5 hours ago
[-]
Last-mile services like roads, electricity, water, and telecommunications are natural monopolies. Normal market forces fail somewhat and you want some government involvement to keep it running smoothly.

This is not at all true of generative AI.

reply
llmslave2
3 hours ago
[-]
I have no idea why you're being downvoted, because you're right. The entire point of taxation is to spread the cost among everyone, and since everyone doesn't utilise every government service, every taxpayer ends up paying for stuff they don't use. That's, like, the whole point...
reply
simonw
5 hours ago
[-]
If that did happen, how would the government then issue those resources?

OpenAI ask for 1m GPUs for a month, Anthropic ask for 2m, the government data center only has 500,000, and a new startup wants 750,000 as well.

Do you hand them out to the most convincing pitch? Hopefully not to the biggest donor to your campaign.

Now the most successful AI lab is the one that's best at pitching the government for additional resources.

UPDATE: See comment below for the answer to this question: https://news.ycombinator.com/item?id=46438390#46439067

reply
3eb7988a1663
5 hours ago
[-]
National HPC labs have been over subscribed for decades with extensive queueing/time sharing allocation systems.

It would still likely devolve into most-money-wins, but it is not an insurmountable political obstacle to arrange some sort of sharing.

Edit: I meant to say over subscribed, not over provisioned. There are far more jobs in the queue than can be handled at once

reply
simonw
5 hours ago
[-]
Huh, TIL - thanks for the correction.

https://www.ornl.gov/news/doe-incite-program-seeks-2026-prop...

> The Innovative and Novel Computational Impact on Theory and Experiment, or INCITE, program has announced the 2026 Call for Proposals, inviting researchers to apply for access to some of the world’s most powerful high-performance computing systems.

> The proposal submission window runs from April 11 to June 16, 2025, offering an opportunity for scientific teams to secure substantial computational resources for large-scale research projects in fields such as scientific modeling, simulation, data analytics and artificial intelligence. [...]

> Individual awards typically range from 500,000 to 1,000,000 node-hours on Aurora and Frontier and 100,000 to 250,000 node-hours on Polaris, with the possibility of larger allocations for exceptional proposals. [...]

> The selection process involves a rigorous peer review, assessing both scientific merit and computational readiness. Awards will be announced in November 2025, with access to resources beginning in 2026.

Not sure OpenAI/Anthropic etc would be OK with a six month gap between application and getting access to the resources, but this does indeed demonstrate that government issued super-computing resources is a previously solved problem.

reply
pear01
5 hours ago
[-]
Well, people bid for USA government resources all the time. It's why the Washington DC suburbs have some of the country's most affluent neighborhoods among their ranks.

In theory it makes the process more transparent and fair, although slower. That calculus has been changing as of late, perhaps for both good and bad. See for example the Pentagon's latest support of drone startups run by twenty-year-olds.

The question of public and private distinctions in these various schemes are very interesting and imo, underexplored. Especially when you consider how these private LLMs are trained on public data.

reply
cambrianentropy
5 hours ago
[-]
In a completely alternate dimension, a quarter of the capital being invested in AI literally just goes towards making sure everyone has quality food and water.
reply
GaryBluto
3 hours ago
[-]
As we all know, throwing money at a problem solves it completely. Remember how Live Aid saved Ethiopia from starvation and it never had any problems again?
reply
marcellus23
5 hours ago
[-]
I'd rather live in a universe where that money is taken out of the military budget.
reply
serf
4 hours ago
[-]
you'll never win that argument, but I absolutely agree.

people have no idea about how big the military and defense budgets worldwide are next to any other example of a public budget.

throw as many pie charts out as you want; people just can't see the astronomical difference in budgets.

I think it's based on how the thing works: a good defense works until it doesn't -- the other systems/budgets fail a bit more gracefully. That asymmetry produces an irrationality in people that yields windfalls of cash.

reply
dmitrygr
5 hours ago
[-]
Without capital invested in the past we wouldn't have almost any modern technology. That has done a lot more for everyone, including food affordability, than simply buying food for people to eat once.
reply
zozbot234
4 hours ago
[-]
Datacenters are not a natural monopoly, you can always build more. Beyond what the public sector itself might need for its own use, there's not much of a case for governments to invest in them.
reply
andy99
5 hours ago
[-]
That could make sense in some steady state regime where there were stable requirements and mature tech (I wouldn’t vote for it but I can see an argument).

I see no argument why the government would jump into a hype cycle and start building infra that speculative startups are interested in. Why would they take on that risk compared to private investors, and how would they decide to back that over mammoth cloning infra or whatever other startups are doing?

reply
Mountain_Skies
2 hours ago
[-]
Given where we are posting, the motive is obvious: to socialize the riskiest part of AI while the investors retain all the potential upside. These people have no sense of shame so they'll loudly advocate for endless public risk and private rewards.
reply
JumpCrisscross
3 hours ago
[-]
> governments invest in the compute/datacenters (read: infra), and let model makers compete on the same playing field

Hmm, what about member-owned coöperatives? Like what we have for stock exchanges.

reply
JoshTriplett
4 hours ago
[-]
In a better parallel universe, we found a different innovation that doesn't use brute-force computation to train systems that unreliably and inefficiently compute things, and that still leaves us able to understand what we're building.
reply
websiteapi
5 hours ago
[-]
why would they do that? not to mention governments are already doing that indirectly by taking equity stakes in some of the companies.
reply
candiddevmike
5 hours ago
[-]
Same reason they should own access lines: everyone needs rackspace/access, and it should be treated like a public service to avoid rent seeking. Having a datacenter in every city where all of the local lines terminate could open the doors to a lot of interesting use cases, really help with local resiliency/decentralization efforts, and provide a great alternative to cloud providers that doesn't break the bank.
reply
websiteapi
5 hours ago
[-]
should the government own all types of "public services"? e.g. search index, video serving infra, etc?
reply
toxic72
4 hours ago
[-]
Public ownership of public services hmm?
reply
paulryanrogers
4 hours ago
[-]
Smells like socialism. Around here we privatize the profits and only socialize the costs. Like the impending bailout of the most politically connected AI companies.
reply
nutjob2
5 hours ago
[-]
That sounds like a nightmare.
reply
paulcole
5 hours ago
[-]
Do you like this idea?
reply
wat10000
5 hours ago
[-]
That seems like a terrible idea. Data centers aren’t a natural monopoly. Regulate the externalities and let it flourish.
reply
amelius
2 hours ago
[-]
Don't forget that the internet exists because of government agencies.
reply
wat10000
3 minutes ago
[-]
I'm not sure how that's relevant here.
reply
Mountain_Skies
2 hours ago
[-]
Cool. Time for the government to seize Amazon.
reply
Zigurd
5 hours ago
[-]
Prediction: on this thread you'll get a lot of talk about how government would slow things down. But when the AI bubble starts to look shaky, see how fast all the tech bros line up for a "public private partnership."
reply
echelon
5 hours ago
[-]
That's malinvestment. Too much overhead, disconnected from long-term demand. The government doesn't have the expertise and isn't lean and nimble. What if it all just blows over? (It won't. But who knows?)

Everything is happening exactly as it should. If the "bubble" "pops", that's just the economic laws doing what they naturally do.

The government has better things to do. Geopolitics, trade, transportation, resources, public health, consumer safety, jobs, economy, defense, regulatory activities, etc.

reply
GannaIlinykh
5 hours ago
[-]
Burn rate often gets treated as a hard signal, but it is mostly about expectations. Once people get used to the idea of cheap intelligence, any slowdown feels like failure, even if the technology is still moving forward. That gap is usually where bubbles begin.
reply
laweijfmvo
5 hours ago
[-]
why does the article use words like burn and incinerate, implying that OpenAI is somehow making money disappear or something? They're spending it; someone is profiting here, even if it's not OpenAI. Is it all Nvidia?
reply
zaphar
5 hours ago
[-]
Because those are normal idioms in financial analysis reporting?
reply
popalchemist
5 hours ago
[-]
Because typically one expects a return on investment with that level of spending. Not only have they run at a loss for years, their spending is expected to increase, with no path to profitability in sight.
reply
bdangubic
5 hours ago
[-]
not that I disagree but would it be fair to say though that we have seen this before where it turned out OK? say Uber? Amazon?
reply
jcranmer
5 hours ago
[-]
IIRC, current estimates are that OpenAI is losing as much money a year as Uber or Amazon lost in their entire lifetime of unprofitability. Also, both Uber and Amazon spent their unprofitable years having a clear roadmap to profitability. OpenAI's roadmap to profitability is "???"
reply
bdangubic
4 hours ago
[-]
I have lived through Amazon’s rags to riches and there was never a clear plan to profitability. Vast majority of people were questioning sanity of anyone investing in Amazon.

I am not saying OpenAI is Amazon but am saying I have seen this before where masses are going “oh business is bad, losses are huge, where is path to profitability…”

reply
esafak
14 minutes ago
[-]
Your recollection is hazy. Bezos chose not to be profitable in order to grow the company, and reap greater rewards in the future. https://www.youtube.com/shorts/wjLs22dNOCE
reply
oh_my_goodness
5 hours ago
[-]
To become the next Uber, do I just need to run huge losses?
reply
bdangubic
4 hours ago
[-]
I wouldn't, but a path to success can clearly come from running 10-digit losses for a loooong time, no?
reply
oh_my_goodness
4 hours ago
[-]
I think you're saying that just running up huge losses is sufficient to create a successful company? But that you personally wouldn't want to run up huge losses? Not sure.
reply
bdangubic
1 hour ago
[-]
nah, I am saying that many (super) successful businesses ran in red financially for a very long time. I would not run a business that way but I am also (fortunately) not a CEO of a multibillion dollar company
reply
toxic72
5 hours ago
[-]
To my knowledge, Amazon never debt-financed their ops like this.
reply
oh_my_goodness
5 hours ago
[-]
Amazon did borrow money, for a long time.
reply
dylan604
4 hours ago
[-]
Where did their financing come from then?
reply
samx18
2 hours ago
[-]
They had a cash-cow called AWS to keep the retail business afloat
reply
bdangubic
34 minutes ago
[-]
not right away
reply
lexicality
5 hours ago
[-]
I suspect most of it is going to utilities for power, water and racking.

That being said, if I was Sam Altman I'd also be stocking up on yachts, mansions and gold plated toilets while the books are still private. If there's $10bn a year in outgoings no one's going to notice a million here and there.

reply
slashdave
4 hours ago
[-]
How many gold toilets do you need? I mean, I don't even own one.
reply
lexicality
4 hours ago
[-]
Tragically I don't make CEO money so I also don't have one but I presume you'd want to have at least one per mansion and another one in the office. Maybe a separate one for special occasions.
reply
simonw
5 hours ago
[-]
Your burn is the money you spend that exceeds the money you earn, see also "burn rate".
reply
wat10000
5 hours ago
[-]
“Burn rate” is a standard financial term for how much money a startup is losing. If you have $1 cash on hand and a burn rate of $2 a year, then you have six months before you either need to get profitable, raise more money, or shut down.
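That runway arithmetic is just cash on hand divided by the burn rate; a minimal sketch using the comment's own toy numbers:

```python
def runway_months(cash, annual_burn):
    """Months until the money runs out at the current burn rate."""
    return 12 * cash / annual_burn

# $1 cash, $2/year burn -> 6 months of runway, as in the example above.
print(runway_months(1, 2))  # -> 6.0
```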
reply
nutjob2
4 hours ago
[-]
> They’re spending it

That's what the words mean in this context.

reply
Avicebron
5 hours ago
[-]
On the radio they mentioned that the total global chocolate market is ~$100B; I googled it when I was home and it seems to be about ~$135B. Apparently that is... all chocolate, everywhere. OpenAI's valuation is about $500B. Maybe going up to like $835B.

I'd love to see the rationale that OpenAI (not "AI" everywhere) is more valuable than chocolate globally.

... so crash early 2026?

reply
cameronh90
4 hours ago
[-]
Ignoring that those numbers aren't directly comparable, it did make me wonder, if I had to give up either "AI" or chocolate tomorrow, which would I pick?

Even as an enormous chocolate lover (in all three senses) who eats chocolate several times a week, I'd probably choose AI instead.

OpenAI has alternatives, but also I do spend more money on OpenAI than I do on chocolate currently.

reply
hervature
3 hours ago
[-]
I am just trying to help you write better. Your writing says "if I had to give up either AI or chocolate [...] I would probably choose AI". However, your language and intent seem to be that you would give up chocolate.
reply
NewJazz
3 hours ago
[-]
If you really wanted to know you could stop eating chocolate or stop using ai and see if you break. Or do both at different times and see how long you last without one or the other.
reply
NewJazz
5 hours ago
[-]
Wait, aren't you comparing revenue and market cap?

People take old things for granted often. Explains the Coolidge effect, and why plenty of people cheat.

reply
vegabook
2 hours ago
[-]
yes. stock v flow error again ("company X cap bigger than country Y GDP" another all too common version).
reply
CryptoBanker
2 hours ago
[-]
I spend a lot more time using AI for work than I do eating chocolate
reply
kyyt
4 hours ago
[-]
I love AI, and ChatGPT has been transformative for me. But would I give it up for chocolate? I honestly don't think I could.
reply
g-unit33
48 minutes ago
[-]
Well they got $40B more to burn lol
reply
agentifysh
4 hours ago
[-]
2008: US Banks pump stocks -> market correction -> taxpayer bailout

2026: US AI companies pump stocks -> market correction -> taxpayer bailout

Mark my words. OpenAI will be bailed out by US taxpayers.

reply
smj-edison
2 hours ago
[-]
In 2008 the US government ended up making more money than it spent, though (at least with TARP), because it invested a ton of money after everything collapsed, when assets were extremely cheap. Once the markets recovered, it made a hefty sum selling all the derivatives it got at the lowest point. Seems like the epitome of buy low, sell high tbh.
reply
bjt
2 hours ago
[-]
I'll take that bet.

Banks get bailed out because if confidence in the banking system disappears and everyone tries to withdraw their money at once, the whole economy seizes up. And whoever is Treasury Secretary (usually an ex Wall Street person) is happy to do it.

I don't see OpenAI having the same argument about systemic risk or the same deep ties into government.

reply
zozbot234
2 hours ago
[-]
Even in a bank bailout, the equity holders typically get wiped out. It's really not that different from a bankruptcy proceeding; there's just a whole lot more focus on keeping the business itself running smoothly. I doubt OpenAI wants to be in that kind of situation.
reply
senshan
3 hours ago
[-]
Not really. It was not about stocks; the collapse of insurance companies was at the core of the 2008 crisis.

The same can happen now on the side of private credit that gradually offloads its junk to insurance companies (again):

As a result, private credit is on the rise as an investment option to compensate for this slowdown in traditional LBO (Figure 2, panel 2), and PE companies are actively growing the private credit side of their business by influencing the companies they control to help finance these operations. Life insurers are among these companies. For instance, KKR's acquisition of 60 percent of Global Atlantic (a US life insurer) in 2020 cost KKR approximately $3 billion.

https://www.imf.org/en/Publications/global-financial-stabili...

reply
deadbabe
1 hour ago
[-]
What does it mean for the AI bubble to pop? Everyone stops using AI en masse and we go back to the old ways? Cloud-based AI stops being available as a product?
reply
stephen_g
1 hour ago
[-]
I think it mostly just means a few hundred billion dollars of value wiped from the stock market - all the models that have been trained will still exist, as will all the datacentres, even if the OpenAI entity itself and some of the other startups shut down and other companies pick up their assets for pennies on the dollar.

But it might mean that LLMs don't really improve much from where they are today, since there won't be the billions of dollars to throw at training for small incremental improvements that consumers mostly don't care to pay anything for.

reply
geldedus
5 hours ago
[-]
paywall, no upvote
reply
Skyy93
5 hours ago
[-]
Someone posted already the non-paywall version: https://news.ycombinator.com/item?id=46438679
reply
camillomiller
27 minutes ago
[-]
I know that the US is allergic to psychiatric diagnosis, but I would strongly invite everyone to learn about what dark-triad traits are, and how Sam Altman fits them in a scary way. We have all collectively fallen prey to the machinations of one of the most disturbing individuals who ever worked in tech. It's quite clear to anyone who knows him, and even to anyone who has cared to read anything about him. He won't be responsible for AGI and its consequences, because that's a lie, but he will have absolutely no problem being responsible for the financial crisis his doings will bring about. It's like watching a train wreck waiting to happen while everyone praises the conduct of the madman operating the locomotive at full steam.
reply
neilv
7 minutes ago
[-]
I don't think that's a diagnosis (in the clinical sense); it's closer to defamation.

Is it necessary to a point you want to make?

You can just point to the behavior of a given entity, such as to conclude it's untrustworthy, without entering the problematic territory of armchair psychoanalysis.

reply