IBM CEO says there is 'no way' spending on AI data centers will pay off
37 points | 41 minutes ago | 12 comments | businessinsider.com
Octoth0rpe
5 minutes ago
[-]
> Krishna also referenced the depreciation of the AI chips inside data centers as another factor: "You've got to use it all in five years because at that point, you've got to throw it away and refill it," he said.

This doesn't seem correct to me, or at least it's built on several shaky assumptions. You would only have to 'refill' your hardware if:

- AI accelerator cards all start dying around the 5-year mark, which is possible given the heat density/cooling needs, but doesn't seem all that likely.

- Technology advances such that only the absolute newest cards can run _any_ model profitably, which only seems likely if we see some pretty radical advances in efficiency. Otherwise, assuming your hardware is stable after five years of burn-in, you could continue to run older models on it for only the cost of the floorspace/power. Maybe you need new cards for new models for some reason (maybe a new fp format that only new cards support? some magic amount of RAM? etc.), but there seems to be room for revenue from older/less capable models at a discounted rate; a rough break-even sketch follows below.
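
As a back-of-the-envelope illustration of that last point (every number below is a made-up placeholder, not a figure from the article): a fully depreciated card only needs its inference revenue to beat its power and floor-space cost to be worth keeping on.

    # Rough break-even sketch for keeping a fully depreciated accelerator running.
    # All inputs are hypothetical placeholders, not data from the article.
    card_power_kw = 0.7          # one card's draw incl. cooling overhead (assumed)
    power_price_per_kwh = 0.08   # industrial electricity, $/kWh (assumed)
    revenue_per_hour = 0.40      # discounted inference revenue per card-hour (assumed)

    cost_per_hour = card_power_kw * power_price_per_kwh   # ~$0.056/hour
    margin = revenue_per_hour - cost_per_hour
    print(f"hourly cost ${cost_per_hour:.3f}, hourly margin ${margin:.3f}")
    # As long as the margin stays positive (after floor space and maintenance),
    # the old card keeps earning even if it can't run the newest models.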

reply
mcculley
1 minute ago
[-]
But if your competitor is running newer chips that consume less power per operation, aren't you forced to upgrade as well and dispose of the old hardware?
reply
scroot
1 minute ago
[-]
As an elder millennial, I just don't know what to say. That a once-in-a-generation allocation of capital should go towards...whatever this all will be, is certainly tragic given the current state of the world and its problems. I can't help but see it as the latest in a lifelong series of baffling, high-stakes decisions of dubious social benefit that necessarily have global consequences.
reply
bluGill
9 minutes ago
[-]
I question the depreciation. Those GPUs will be obsolete in 5 years, but whether the newer ones will be enough better to be worth replacing them is an open question. CPUs stopped getting exponentially faster 20 years ago (they are faster, but not with the jumps we saw in the 1990s).
reply
rlpb
6 minutes ago
[-]
> Those GPUs will be obsolete in 5 years, but whether the newer ones will be enough better to be worth replacing them is an open question

Doesn't one follow from the other? If newer GPUs aren't worth an upgrade, then surely the old ones aren't obsolete by definition.

reply
lo_zamoyski
5 minutes ago
[-]
> Those GPUs will be obsolete in 5 years, but whether the newer ones will be enough better to be worth replacing them

Then they won't be obsolete.

reply
maxglute
2 minutes ago
[-]
How long can AI GPUs stretch? Optimistically 10 years, and we're still looking at $400B+ in profit needed just to cover interest. In terms of depreciating assets, silicon is closer to tulips than to rail or fiber.
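
One way to land on a number in that ballpark (the ~$8T figure comes from elsewhere in this thread, and the 5% rate is purely an assumption):

    # Hypothetical carrying cost: interest on the ~$8T build-out figure cited downthread,
    # at an assumed 5% blended cost of capital. Both inputs are assumptions.
    capital = 8e12   # total AI infrastructure spend, $
    rate = 0.05      # assumed cost of capital
    print(f"annual carrying cost: ${capital * rate / 1e9:.0f}B")  # ~$400B per year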
reply
myaccountonhn
5 minutes ago
[-]
> In an October letter to the White House's Office of Science and Technology Policy, OpenAI CEO Sam Altman recommended that the US add 100 gigawatts in energy capacity every year.

> Krishna also referenced the depreciation of the AI chips inside data centers as another factor: "You've got to use it all in five years because at that point, you've got to throw it away and refill it," he said.

And people think the climate concerns of AI are overblown. The US currently has ~1,300 GW of generating capacity, so 100 GW would be a huge increase each year.
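
For scale, using the ~1,300 GW figure above (treat it as approximate):

    # 100 GW/year of new capacity against ~1,300 GW of existing US capacity
    existing_gw = 1300
    added_per_year_gw = 100
    print(f"{added_per_year_gw / existing_gw:.1%} of current capacity added per year")  # ~7.7%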

reply
kenjackson
15 minutes ago
[-]
I don't understand the math behind how we compute $80b for a gigawatt datacenter. What are the costs in that $80b? I literally don't understand how to get to that number -- I'm not questioning its validity. What percent is power consumption, versus land cost, versus building and infrastructure, versus GPUs, versus people, etc.?
reply
georgeecollins
7 minutes ago
[-]
First, I think it's $80b per 100 GW datacenter. The way you figure that out is: a GPU costs $x and consumes y watts. The $x is pretty well known; for example, an H100 costs $25-30k and uses 350-700 watts (that's from Gemini and I didn't check my work). You add an infrastructure cost (i) to the GPU cost, but that should be pretty small, like 10% or less.

So a 1-gigawatt data center uses n chips, where y*n = 1 GW. The total cost is then roughly n * x * (1 + i).
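
Plugging in the numbers quoted above (H100 at roughly $27.5k, 350-700 W, ~10% infrastructure overhead; all of them unverified):

    # Worked version of the formula above, using this comment's own (unverified) numbers.
    x = 27_500            # $ per GPU, midpoint of the quoted $25-30k
    i = 0.10              # infrastructure overhead as a fraction of GPU cost
    for y in (350, 700):  # watts per GPU, the low and high ends of the quoted range
        n = 1_000_000_000 / y      # chips such that y * n = 1 GW
        total = n * x * (1 + i)    # GPU cost plus overhead
        print(f"{y} W/GPU -> {n:,.0f} chips, ~${total / 1e9:.0f}B per GW")
    # Roughly $86B/GW at 350 W and $43B/GW at 700 W, so the result is very
    # sensitive to the assumed power draw per card.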

I am not an expert so correct me please!

reply
verdverm
35 minutes ago
[-]
The IBM CEO is steering a broken ship and hasn't improved its course; he's not someone whose words you should take seriously.

1. They missed the AI wave (they hired me to teach Watson law only to lay me off 5 weeks later, one cause of the serious talent issues over there)

2. They bought most of their data centers (as acquired companies); they have no idea how to build and operate one, not at the scale the "competitors" are operating at

reply
observationist
6 minutes ago
[-]
IBM CEO has sour grapes.

IBM's HPC products were enterprise-oriented slop that banked on their reputation, and the poor ROI torched their credibility once compute costs started getting taken seriously. Watson and other products got smeared into kafkaesque, arbitrary branding for other product suites, and they were nearly all painful garbage - mobile device management standing out as a particularly grotesque system to use. Now IBM has no legitimate competitive edge in any of the bajillion markets they tried to target, no credibility in any of their former flagship domains, and nearly every one of their products is hot garbage that costs too much, often by orders of magnitude, compared to similar functionality you can get from open source or even free software offered and serviced by other companies. They blew a ton of money on HPC before there was any legitimate reason to do so. Watson on Jeopardy was probably the last legitimately impressive thing they did, and all of their tech and expertise has been outclassed since.

reply
duxup
20 minutes ago
[-]
Is his math wrong?
reply
nabla9
22 minutes ago
[-]
Everyone should read his arguments carefully. Ponder them in silence and accept or reject them based on their strength.
reply
scarmig
4 minutes ago
[-]
His argument follows almost directly, and trivially, from his central premise: a 0% or 1% chance of reaching AGI.

Yeah, if you assume technology will stagnate over the next decade and AGI is essentially impossible, these investments will not be profitable. Sam Altman himself wouldn't dispute that. But it's a controversial premise, and one that there's no particular reason to think that the... CEO of IBM would have any insight into.

reply
nyc_data_geek1
7 minutes ago
[-]
IBM can be a hot mess, and the CEO may not be wrong about this. These things are not mutually exclusive.
reply
malux85
21 minutes ago
[-]
Sorry that happened to you; I have been there too.

When a company is hiring and laying off like that, it's a serious red flag. The one that did that to me is dead now.

reply
parapatelsukh
6 minutes ago
[-]
The spending will be more than paid off, since the taxpayer is the lender of last resort. There are too many funny names among the investors/creditors, a lot of mountains in Germany and similar, ya know.
reply
bluGill
12 minutes ago
[-]
This is likely correct overall, but it can still pay off in specific cases. However, those are not blind investments; they are targeted, with a planned business model.
reply
wmf
13 minutes ago
[-]
$8T may be too big an estimate. Sure, you can take OpenAI's $1.4T and multiply it by N, but the other labs do not spend as much as OpenAI.
reply
qwertyuiop_
9 minutes ago
[-]
The question no one seems to be answering is: what will be the EOL for these newer GPUs being churned out by NVIDIA? What % of annual capital expenditure is GPU refresh? Will they be perpetually replaced as NVIDIA comes up with newer architectures and the AI companies chase the proverbial lure?
reply