You can see Nvidia stepping in throughout the ecosystem with confidence boosting investments where needed. They haven't just supported Anthropic and OpenAI.
If OpenAI and Anthropic succeed, and get their business flywheels fully spinning, they don't necessarily need more capital from Huang. Ultimately Nvidia's goal is to profit from their long-term success by selling them GPUs for a long, long time. The goal isn't to keep plowing money into them forever.
Just about all of the AI providers' "raises" are a fraction of the reported "raise", like this one.
They didn't "raise" $100b. They got commitments for $35b, with said commitments being dependent on meeting certain criteria.
Every "raise of $FOO" I've seen in the past year or two has not resulted in them getting their hands on $FOO in cash to spend.
ok, sounds obvious
> Nvidia, for its part, isn’t offering much more on the matter
ok, so no more news from nvidia
> Still, a few other dynamics might also explain the pullback..
Wait it's a pullback?
This is terrible reporting, right?
https://nvidianews.nvidia.com/news/nvidia-announces-financia...
It doesn't matter if the consumer market is 4T, if the AI market is 60T!
Most people would pick up both.
These economic proclamations don’t seem to make sense, when applied to different contexts — which suggests what you’re saying might be folk wisdom rather than sound theory (and greatly over simplifying the problem).
You’re also discounting ecosystem effects — gaming GPUs driving demand for datacenter and workstation GPUs as hobbyist experimentation turns into industrial usage. We don’t know what would happen if nVidia stopped suppressing the GPU market, because it’s never been tried — nVidia has always viciously undercut their own grassroots.
On the other hand, right now the market doesn't seem to think that the >$60bn of datacenter revenue is going away, or even going to slow down _growing_, any time soon. Just adding 10% more revenue there is worth more than doubling their gaming GPU business, which they likely can't do anyway.
The gaming and CAD markets are real expectations that latch onto reality: grow the education system and you grow both. The same goes for other matrix math, such as hashing.
AI has reached a point where the bottleneck is software, not hardware. And the direction AI hardware is diverging in doesn't map onto the math behind CAD and gaming.
The only winning strategy for these guys is to exploit the market for all it's worth during shortages and carefully control production to manage the inevitable gluts.
Citation very much needed.
At the very least, OpenAI seems to believe more and larger datacenters is the path to better models... and they've been right about that every time so far.
Does that mean they produce better slop, or more slop faster?
Every GW of Blackwell generates more revenue than the entire gaming business does in 1 year.
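As a rough sanity check of that claim, using illustrative figures that are assumptions rather than sourced numbers (roughly $40B of Nvidia revenue per GW of Blackwell deployed, versus on the order of $11B of annual gaming revenue):

```python
# Back-of-envelope check with ILLUSTRATIVE, assumed figures (not sourced):
revenue_per_gw_blackwell = 40e9   # assumed ~$40B of Nvidia revenue per GW deployed
gaming_revenue_per_year = 11e9    # assumed ~$11B of Nvidia gaming revenue per year

# Ratio of one GW of Blackwell to a full year of gaming revenue
ratio = revenue_per_gw_blackwell / gaming_revenue_per_year
print(f"One GW of Blackwell ~= {ratio:.1f}x a year of gaming revenue")
```

Under those assumptions a single gigawatt of Blackwell is worth several years of gaming revenue, so even large swings in the assumed figures don't change the direction of the comparison.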
Nvidia is best known for selling huge volumes of GPUs to the hyperscalers & neoclouds, but I don't think lots of folks appreciate how many GPUs ISVs like Snowflake, Databricks, Teradata, etc consume, too, just by virtue of designing much of their internal products around CUDA & Nemotron.
I don't think it's as easy as others say, though.
NVIDIA has released Deep Learning Super Sampling (DLSS), a Frame Generation model, and Video Super Resolution (VSR), with DLSS being the most popular/well-known. (DLSS is outstanding technology, despite the sometimes misleading marketing.)
Nvidia has released countless models:
- Alpamayo 1 (car navigation model)
- Cosmos-Reason2 (reasoning vision-language model)
- Nemotron 3 (large language model series)
- Llama-Nemotron (large language model series)
- Isaac GR00T (VLA models)
- Nemotron OCR (optical character recognition models)
Take a look at their HuggingFace Collections, almost 100 different collections with countless models inside each collection: https://huggingface.co/nvidia/collections
nVidia has an open position for system architect, orbital station AI datacenter
Are you suggesting they're lacking on the ultra-high-end? That is, $5-10M+ in comp to sign a single researcher/IC; industry rock-star territory.
Major frontier AI labs do tend to have that type of talent in abundance. I'm sure NV has the equivalent when it comes to hardware design. Surely in AI research too, but perhaps not in the same quantities.
Jensen is smart. He's gone through over 30 years of tech cycles.
Nvidia actively commoditizes LLMs. Look at Nemotron. They've avoided making a SOTA model solely to keep the hyperscalers (aka crack addicts) coming back for more GPUs.
As soon as the bubble bursts, they can release some open weight NemoMambaDiffusiontron and keep folks buying GPUs to run the damn thing.
It still wouldn't be smart to do so, as this would fall into the common business pitfall of assuming you can easily do the work of the next layer of the stack.
Well, classically, to capture more margin for yourself. In business school they call this Vertical Integration. Samsung did exactly this. AWS too.
Nvidia rushed some investments in both companies just before they went public and is now just waiting to get paid.