(via Learning to Select and Generalize Striking Movements in Robot Table Tennis)
Artificial intelligence may seem like a fantasy far into the future, but simple forms of artificial learning are already possible, and quite formidable. Researchers at the Technical University of Darmstadt (TUD) in Germany have taught a robot to play ping-pong, and it does not disappoint.
The game can be difficult even for humans, as it demands a high level of hand-eye coordination along with learned positioning and hitting technique. The TUD team recognized that it would be nearly impossible to pre-program every movement a robot would need to return a ball coming from any direction. Instead, they applied a method called Mixture of Motor Primitives (MoMP) that allows the robot to find its own playing style.
MoMP was put to use by first programming a set of 25 basic hitting motions into the robotic arm; these made up the robot’s dynamical systems motor primitives (DMPs). Depending on the objective, the robotic arm’s present state, and camera information (about its success hitting the ball toward the desired spot), the robot can generalize the DMPs to find what works best across a wider set of situations.
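To give a feel for the idea, here is a minimal sketch of a MoMP-style selection step: each stored primitive is paired with a context (say, the incoming ball state it was demonstrated for), and a gating function weights primitives by how well their context matches the current ball state before blending their outputs. Everything here (the context dimension, the Gaussian gating, the placeholder numbers) is an illustrative assumption, not the TUD team's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_primitives = 25   # the article mentions 25 basic hitting motions
context_dim = 6     # e.g. ball position + velocity (assumed)
n_joints = 7        # the arm has seven degrees of freedom

# Assumed: one training context per primitive, here random placeholders
contexts = rng.normal(size=(n_primitives, context_dim))

def gate_weights(ball_state, contexts, bandwidth=1.0):
    """Softmax-like weights: primitives whose training context lies
    closer to the current ball state get more influence."""
    d2 = np.sum((contexts - ball_state) ** 2, axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))
    return w / w.sum()

def mix_primitives(ball_state, primitive_outputs, contexts):
    """Blend each primitive's suggested joint command by its weight."""
    w = gate_weights(ball_state, contexts)
    return w @ primitive_outputs   # weighted sum over primitives

ball_state = rng.normal(size=context_dim)
primitive_outputs = rng.normal(size=(n_primitives, n_joints))
command = mix_primitives(ball_state, primitive_outputs, contexts)
print(command.shape)  # one command per joint: (7,)
```

Unneeded primitives naturally fade out in this scheme: if a primitive's context never matches incoming balls, its weight stays near zero.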
The TUD robot is built around a Barrett WAM arm with seven degrees of freedom (DoF). Four Prosilica Gigabit GE640C cameras, sampling at 60 Hz, feed an extended Kalman filter for accurate and precise tracking of the ball. The robot accounts for gravity, but the team has not yet been able to factor in spin. Still, hanging from the ceiling, the robot can cover around 1 m² of playing area.
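As a rough illustration of that tracking step, the sketch below runs a plain linear Kalman filter on a simulated ball under gravity at the 60 Hz camera rate. This is a simplification: with spin and drag ignored, the ballistic model is linear, so an ordinary Kalman filter stands in for the extended one; all noise values and the simulated trajectory are assumptions, not the TUD team's parameters.

```python
import numpy as np

dt = 1.0 / 60.0                      # 60 Hz camera sampling rate
g = np.array([0.0, 0.0, -9.81])      # gravity (m/s^2), z is up

# State: [x, y, z, vx, vy, vz]; measurement: camera position fix [x, y, z]
F = np.eye(6)
F[:3, 3:] = dt * np.eye(3)           # position += velocity * dt
H = np.hstack([np.eye(3), np.zeros((3, 3))])
Q = 1e-4 * np.eye(6)                 # assumed process noise
R = 1e-3 * np.eye(3)                 # assumed measurement noise

def predict(x, P):
    u = np.concatenate([0.5 * g * dt**2, g * dt])  # gravity's effect per step
    return F @ x + u, F @ P @ F.T + Q

def update(x, P, z):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(6) - K @ H) @ P
    return x, P

# Track a simulated lob for half a second of frames
rng = np.random.default_rng(1)
x = np.array([0.0, 0.0, 1.0, 2.0, 0.0, 1.0])   # true initial state
est, P = x + 0.05, np.eye(6)                   # deliberately offset guess
for _ in range(30):
    x, _ = predict(x, np.zeros((6, 6)))        # advance the true ball
    z = x[:3] + rng.normal(scale=0.01, size=3) # noisy camera measurement
    est, P = predict(est, P)
    est, P = update(est, P, z)
```

After a few dozen frames the position estimate settles to within the measurement noise of the true ball, which is what lets the arm plan a strike ahead of the ball's arrival.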
To teach the robot, the researchers performed three experiments. The first put the bot against a ball launcher that fed it easy lobs to practice its DMPs. Next, MoMP was put to the test: the launcher was aimed at a spot previously unreachable by the robot’s pre-programmed movements. After 60 trials, the robot was already hitting 79% of the balls, and the team noticed that it had adapted some of the primitive movements and discarded others it did not need.
Lastly, the robot met its human match. After only an hour of play, it was returning 88% of the balls lobbed by its human adversary. (How good the human player was is another question; a weak opponent could have skewed the results.)
The TUD team will present its findings at the AAAI symposium in Arlington, Virginia, in November, which focuses on this kind of guided, complex robot learning.
Cabe