I hadn’t really appreciated the connection between his chess and game-industry experience and the early reinforcement learning work that put DeepMind on the map, e.g. the Atari game AI demos, AlphaGo, AlphaZero, etc. There is a fascinating thread there, and it’s certainly a case of the right person, with the right mix of past experience and vision, picking exactly the right problems to move the technology forward.
The book has a few flaws: it’s maybe a little too uncritical of its subject. But that’s almost a given with books of this kind where the author gets a lot of access.
Out of all the heads of AI orgs out there, Demis is the best, but the book did him a disservice by painting an unrealistically sunny picture of him as some kind of visionary figure.
Like I already said, bias is inevitable in a book where the writer gets this much access (to the point of interviewing Hassabis in a North London pub every month), but the benefit to readers is that you get far more insight into what makes the guy tick than you would from a book written by an outsider. I certainly learned a lot, and just because I did doesn’t mean I’m buying into some cult of tech hero worship.
Guys, he's just one smart guy who happened to be in the right place at the right moment of the AI technological revolution. He's definitely not the second coming of Christ.
I want to learn to think more like him. What differences between his way of thinking and mine create such a powerful gap? If I could understand those differences, I might also understand how to narrow that gap. And if we could identify the causes of that gap, perhaps humans in general could develop much further.
I truly envy his intelligence. When I read his writings, I can see fragments of knowledge that he cannot hide, and it makes me think: I want to become like that too.
I don't agree with everything he says, but he's obviously an enormously deeper thinker than the likes of Altman.
The main problem is that, under capitalism, private companies' only mission is to serve their shareholders/owners.
Public institutions have the mission to serve the public.
The only real solution is to make AI a public good/utility which should be regulated on an international level and overseen by trustworthy institutions.
The reason media training exists is to win over people like you.
I know people high up at OpenAI, and I'm quite sure Altman etc. don't care either; the only difference is that they've been trained well enough to hide it.
There is a precedent for this in nuclear weapons, and it did not work. All it takes is one sufficiently resourced nation-state defecting from whatever agreements exist, and the whole thing collapses. If the incentives point toward defection, that outcome is inevitable.