University of Albany's robot in action
It was demonstrated in 1997 that a computer's computational abilities could outsmart a human when IBM's supercomputer Deep Blue beat world chess champion Garry Kasparov at his own game. The match pitted the human brain against software written by a team of developers, but Deep Blue still needed human assistance to physically move the chess pieces.
Fourteen years later, roboticists hope to grant computers like Deep Blue the ability to play an entire game of chess independent of human assistance. In 2011, a competition was held at the annual Association for the Advancement of Artificial Intelligence Conference in San Francisco, where roboticists from various universities and labs presented robots resembling those found on an automotive assembly line.
Their robots ran into trouble trying to identify the game pieces and then move them in accordance with the rules of the game. Some robots used cameras to locate pieces, but none were programmed to visually identify them. Instead, each robot relied on a memory of the initial position of every piece to know which piece it was and how it could legally be moved around the chessboard. Regardless of the approach, all of the robots had a tough time determining which moves had been made and where exactly the pieces sat. They were also slow to physically execute moves that took only milliseconds to compute.
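To illustrate the idea, here is a minimal sketch (in Python, and not any team's actual code) of how a robot can "know" which piece is which purely from memory: it records the standard starting position and updates that record whenever the camera reports something leaving one square for another. The function names and piece codes are hypothetical, chosen only for this example.

```python
# Hypothetical sketch: track piece identity from memory of the starting
# position and the detected moves, rather than visual recognition.

STARTING_BACK_RANK = ["R", "N", "B", "Q", "K", "B", "N", "R"]

def initial_board():
    """Return a dict mapping squares like 'e2' to piece codes ('wP', 'bK', ...)."""
    board = {}
    files = "abcdefgh"
    for i, piece in enumerate(STARTING_BACK_RANK):
        board[files[i] + "1"] = "w" + piece   # white back rank
        board[files[i] + "2"] = "wP"          # white pawns
        board[files[i] + "7"] = "bP"          # black pawns
        board[files[i] + "8"] = "b" + piece   # black back rank
    return board

def apply_move(board, src, dst):
    """Update the remembered board after the camera reports a piece moving
    from `src` to `dst`. A capture simply overwrites whatever was on `dst`."""
    piece = board.pop(src)   # identity comes from memory, not from the camera
    board[dst] = piece
    return piece

if __name__ == "__main__":
    board = initial_board()
    moved = apply_move(board, "e2", "e4")   # camera only saw something leave e2 for e4
    print(moved, "is now on e4")            # memory says it was the white pawn, 'wP'
```

The weakness the competitors ran into follows directly from this design: if the camera misjudges which square a piece actually landed on, the remembered board and the real board silently drift apart.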
The winning robot was "Maxwell" from the University of Albany. Maxwell could move along the side of the board and likely made generous use of that mobility to clearly see moves and piece placement from different angles.
While a robot's arm-camera coordination still pales in comparison to a human's hand-eye coordination, it is worth noting that we developed those skills over hundreds of thousands of years. Roboticists and programmers have been working on developing the equivalent skills in robots for only a few decades.
Cabe