Google’s DeepMind AI can now play table tennis at a competitive level
Before artificial intelligence takes over the world, it looks like it will first have to beat us at table tennis: Google has reported that a robot that uses DeepMind AI can now play ping-pong at a “human amateur level.”
DeepMind is the lab where Google works on some of its most advanced AI technologies. We’ve seen plenty of hands-on DeepMind experiments before, from AI adding audio to silent videos to discovering new materials.
In a new paper, Google researchers report that their table tennis robot won 13 out of 29 matches against humans. The success rate varied depending on the level of players it competed with (from beginner to advanced).
“This is the first robot that can play a sport with humans at a human level. It is a milestone in learning and controlling robots,” the researchers write. However, they also say that this is only a small step forward in the broader field of having robots perform useful, practical skills.
Not so fast
Table tennis was chosen by the DeepMind team because of the many different elements involved, from the complex physics of movement to the hand-eye coordination required to successfully return the ball.
The robot was trained by focusing on each specific stroke type individually, from backhand spin to forehand serve. This training was then combined with a more advanced algorithm designed to choose the required stroke type every time.
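The two-level design described above can be sketched in a few lines. This is a minimal illustrative stand-in, not the paper’s actual system: the skill names, the ball-state fields, and the hand-written selection rule are all assumptions, and a real system would learn both levels rather than hard-code them.

```python
# Hypothetical sketch: low-level "skill" policies (one per stroke type)
# plus a high-level selector that picks a skill from the incoming ball
# state. All names and the selection rule are illustrative assumptions.

SKILLS = {
    "forehand_topspin": lambda ball: f"forehand swing at speed {ball['speed']:.1f}",
    "backhand_spin":    lambda ball: f"backhand slice at speed {ball['speed']:.1f}",
    "forehand_serve":   lambda ball: "serve toss and strike",
}

def select_skill(ball):
    """High-level policy: choose a stroke type from the ball state.
    The real system learns this mapping; this is a hand-written stand-in."""
    if ball["serving"]:
        return "forehand_serve"
    return "forehand_topspin" if ball["side"] == "forehand" else "backhand_spin"

def play_shot(ball):
    skill = select_skill(ball)           # high-level: which stroke?
    return skill, SKILLS[skill](ball)    # low-level: execute that stroke

skill, action = play_shot({"serving": False, "side": "backhand", "speed": 4.2})
print(skill, "->", action)
```

The point of the split is that each low-level skill can be trained in isolation on its own stroke, while the selector only has to solve the simpler problem of choosing among already-competent skills.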
As you might expect, the robot struggled the most with faster shots (which gave the AI less time to think about what to do), and the researchers are already thinking about how to improve the system, including how to make it more unpredictable in its play.
There is even a built-in ability to learn from the human opponent’s strategies and weigh their strengths and weaknesses. The full article is definitely worth reading if you’re interested in the challenges of AI training and scaling, and how robots can develop the combination of skills needed to perform physical tasks.