I say this not because I think his prediction as stated here is necessarily wrong or unreasonable, but because I myself might want to make investment decisions based on this prediction, and translating a prediction about the future into the correct trades to execute today is not trivial.
Without addressing his argument about AGI-from-LLMs - because I don't have any better information myself than listening to Sutskever on Dwarkesh's podcast - I am somewhat skeptical that the current market price of AI-related assets is actually pricing in a "60-80%" chance of AGI from LLMs specifically, rather than all the useful applications of LLMs that are not AGI. But this isn't a prediction I'm very confident in myself.
> I’ve listened to the optimists—the researchers and executives claiming [...]
Actually, researchers close to the problem are the first ones to give farther-out target dates. And Yann LeCun is very vocal about LLMs being a dead end.
And that's why there's so much investment. It's more of a "when" question than an "if" question (although I have seen people claim that only meat can think).
LLMs have shown us just how easily we are fooled.
AGI has shown us just how little we understand about "intelligence".
Stand by for more of the same.
I think all the hype is more about AI replacing human effort in more ambiguous tasks than computers have helped with before.
A more interesting question would be - what would the world do with AGI anyway?
That's for regular human-level AGI. The issue becomes more stark for ASI, artificial superintelligence. If the AI employee is smarter than most, if not all, humans, why hire humans at all?
Of course, this is all theoretical. We don't have the technology yet, and we have no idea what it would even cost if/when we get there.
That is how it is done today. One asks one's philosophical priors what one's experiments must find.