Why I'm Betting Against the AGI Hype
28 points | 6 hours ago | 5 comments | notesfromthecircus.com | HN
JuniperMesos
5 hours ago
Interesting that in an article entitled "Why I'm betting against AGI hype", the author doesn't actually say what bet he is making - i.e. what specific decisions he is making based on his prediction that AGI is much less likely to arise from LLMs than the market is implicitly pricing in. What assets is he investing in or shorting? What life decisions is he making differently than he otherwise would?

I say this not because I think his prediction as stated here is necessarily wrong or unreasonable, but because I myself might want to make investment decisions based upon this prediction, and translating a prediction about the future into the right trades to execute today is not trivial.

Without addressing his argument about AGI-from-LLMs - because I don't have any better information myself than listening to Sutskever on Dwarkesh's podcast - I am somewhat skeptical that the current market price of AI-related assets is actually pricing in a "60-80%" chance of AGI from LLMs specifically, rather than all the useful applications of LLMs that are not AGI. But this isn't a prediction I'm very confident in myself.

reply
karmakaze
5 hours ago
Armchair commentary.

> I’ve listened to the optimists—the researchers and executives claiming [...]

Actually, researchers close to the problem are the first ones to give farther out target dates. And Yann LeCun is very vocal about LLMs being a dead end.

reply
nomel
2 hours ago
> farther out target dates

And that's why there's so much investment. It's more of a "when" question than an "if" question (although I have seen people claim that only meat can think).

reply
arisAlexis
1 hour ago
The same guy who predicted LLMs couldn't do something in 5,000 years, and then they did it the next year? (Google this, seriously.)
reply
klysm
3 hours ago
He is starting a business that depends on them being a dead end.
reply
techblueberry
3 hours ago
Sounds like he’s putting his money where his mouth is.
reply
drpixie
1 hour ago
Summary of the current situation...

LLMs have shown us just how easily we are fooled.

AGI has shown us just how little we understand about "intelligence".

Stand by for more of the same.

reply
m463
2 hours ago
I don't think there's a lot of "AGI hype".

I think all the hype is more about AI replacing human effort in more ambiguous tasks than computers helped with before.

A more interesting idea would be - what would the world do with AGI anyway?

reply
arisAlexis
59 minutes ago
Can't you imagine what a world with a species smarter than humans would be like? Yeah, it's difficult.
reply
fragmede
1 hour ago
Hire digital employees rather than human ones. When all your interaction is digital, one possibility is replacing the human on the other end with an AI that is, in theory, just as capable. Then have the AI write docs for your AI employee and spin up additional employees like EC2 instances on AWS. Spin up 30 to clear out your Trello/Monday.com/Jira board, then spin them back down as soon as they've finished, with no remorse, because they're just AI robots. That's what you could do with such a technology, anyway.

That's for regular human-level AGI. The issue becomes more stark for ASI, artificial superintelligence. If the AI employee is smarter than most, if not all, humans, why hire humans at all?

Of course, this is all theoretical. We don't have the technology yet, and have no idea what it would even cost if/when we reach that.
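
A minimal sketch of that "spin up, clear the board, spin down" pattern, assuming a hypothetical agent_worker stub and a hard-coded ticket list in place of any real board or agent API:

    import concurrent.futures

    # Hypothetical backlog; in the scenario above these would come from a
    # Trello/Monday.com/Jira API, which isn't modeled here.
    tickets = [f"TICKET-{n}" for n in range(1, 31)]

    def agent_worker(ticket: str) -> str:
        # Stand-in for one "AI employee"; a real system would call some
        # LLM/agent service here and post the result back to the board.
        return f"{ticket}: resolved (placeholder)"

    # Spin up 30 workers, clear the board, then spin them back down:
    # the pool exists only for the duration of the backlog.
    with concurrent.futures.ThreadPoolExecutor(max_workers=30) as pool:
        for result in pool.map(agent_worker, tickets):
            print(result)
    # Workers are shut down automatically when the with-block exits.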

reply
FrankWilhoit
5 hours ago
"...a philosophical confusion about the nature of intelligence itself...."

That is how it is done today. One asks one's philosophical priors what one's experiments must find.

reply
arisAlexis
1 hour ago
Contrarianism as a mental property of humans
reply