It plays much worse and the HN discussion is anchored around whether it's OK to call it "human-level" or if the authors should have clarified that they meant a human who doesn't actually play table tennis. But it was accepted as being SOTA at that time.
What happened since then? This looks like the kind of level of advance we see in, say, coding AIs, but I thought physical robotics was advancing much more slowly.
A partial answer is that the new robot cheats in ways DeepMind's didn't seem to: it has high-speed cameras all over the room and can detect spin by observing the logo on the ball. But I'm not sure this explains such a big advance.
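To make the logo trick concrete, here's a toy sketch of the idea, assuming a vision pipeline (not shown) has already extracted the logo's apparent orientation angle in each high-speed frame. The function name and the numbers are illustrative, not from the paper; the point is just that at a high enough frame rate, the per-frame change in logo angle gives you the spin rate directly.

```python
import math

def spin_rate_rps(angles, fps):
    """Estimate spin in revolutions per second from per-frame logo
    orientation angles (radians), unwrapping each step so jumps
    across the +/-pi boundary aren't misread as huge rotations."""
    total = 0.0
    for a0, a1 in zip(angles, angles[1:]):
        d = a1 - a0
        # keep each per-frame step in (-pi, pi]
        d = (d + math.pi) % (2 * math.pi) - math.pi
        total += d
    frames = len(angles) - 1
    rad_per_sec = total * fps / frames
    return rad_per_sec / (2 * math.pi)

# Hypothetical example: logo turns 10 degrees per frame at 500 fps
angles = [math.radians(10 * i) for i in range(6)]
print(round(spin_rate_rps(angles, 500), 1))  # ~13.9 rev/s
```

Note the built-in limitation: the unwrapping only works if the ball rotates less than half a turn between frames, which is exactly why you'd need high-speed cameras for a fast-spinning ball.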
Also IT'S TABLE TENNIS, NOT PING PONG!
Interestingly, YouTube searches go the other way, with a much bigger difference in favour of "ping pong".
Case in point: we're all expecting China to invade Taiwan soon, or it will run out of soldiers because of the one-child policy of the 70s/80s.
Meanwhile, Ukraine is holding up against a "modern" army with quickly assembled drones.
So it all seems a bit like "they'll never put tanks through the Ardennes", sort of?
Where and when will the first invasion of a country by a purely remote-controlled, AI-assisted army take place?
Will robot battalions embed civilians to act as human shields? Will the AI learn to mistreat the locals to maintain fear, or will it see that as a needless distraction and rush to the centres of power?
If war is mostly played out from a distance, will years of playing RTS games give South Korea an edge?
I kinda think the competition among the big dogs (US/Russia/China/etc.) would eventually green-light ANY AI/robot project that can justify tipping the scales somehow, and in the process completely destroy the last element of any political counterweight. Because "fear gives men wings".
I would really hate to live in a dystopian world worse than what is described in the books/movies.
https://www.nature.com/articles/s41586-026-10338-5
I would love to see a video of this thing that shows the whole table. From the paper I guess they have to light the area very brightly. But it seems like a pretty serious setup.
And, like many AIs, it can have "jagged capability" gaps, with inhuman failure modes living in them — ones that humans can learn to exploit, and that the robot won't adapt to because it doesn't learn continuously. This has happened with various ML systems designed to compete against humans.
Now build a robot that can catch a bullet.
> Exactly! He was a machine designed to hit blerns. I mean come on, Wireless Joe was nothing but programmable bat on wheels.
> Oh? And I suppose Pitch-o-mat 5000 was just a modified howitzer?
> Yep!