Self-aware robots may seem like something out of a science fiction film, but they are edging closer to reality. Engineering researchers at Columbia have made notable progress in robotics by creating a robot with a primitive form of self-awareness, one that starts with no knowledge of physics, geometry, or motor dynamics. At the beginning of their demonstration, the robot does not know what it is or what shape it has. After a brief period of random movement and a day of intensive computing, the robot creates a self-simulation. It then uses that simulator to adapt to different situations, handle unknown tasks, and detect and repair damage to its own body.
A video showing the robotic arm in use. (Video Credit: Columbia Engineering)
Until now, robots have operated using models of themselves painstakingly constructed by human engineers. If robots are to become truly independent, they must learn to simulate themselves. The study used a free-moving articulated robotic arm. Moving around at random, the robot collected one thousand trajectories, each made up of one hundred points, and then used deep learning to build a self-model from that data. The first versions of the self-model were highly inaccurate: the robot did not know what it was or how its joints were connected.
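The study's own training code isn't reproduced here, but the babble-then-model loop is straightforward to sketch. Below is a minimal, hypothetical PyTorch sketch of the idea: the four-joint arm, the `babble` stand-in (a fixed random map playing the role of the physical body), and the network shape are all assumptions, not the Columbia team's implementation.

```python
import torch
import torch.nn as nn

DOF = 4           # joint count (an assumption for this sketch)
N_TRAJ = 1_000    # trajectories collected while moving at random
N_POINTS = 100    # points recorded per trajectory

# Stand-in for the physical arm: the study sent random joint commands to
# the real robot and recorded the resulting end-effector positions. Here
# a fixed random linear map plays the role of the unknown body.
true_body = torch.randn(DOF, 3)

def babble(n_samples):
    joints = torch.rand(n_samples, DOF) * 3.14   # random joint angles
    tip = torch.sin(joints) @ true_body          # "measured" tip positions
    return joints, tip

# The deep network that becomes the self-model: joint angles in,
# predicted end-effector position out.
self_model = nn.Sequential(
    nn.Linear(DOF, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 3),
)

joints, tip = babble(N_TRAJ * N_POINTS)
opt = torch.optim.Adam(self_model.parameters(), lr=1e-3)
for epoch in range(200):                         # toy training budget
    loss = nn.functional.mse_loss(self_model(joints), tip)
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"self-model prediction error: {loss.item():.4f}")
```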
After less than 35 hours of training, the self-model matched the actual, physical robot to within about four centimeters. The robot first used the self-model to perform a pick-and-place task in a closed-loop system, in which it could recalibrate its position at each step along the trajectory. With closed-loop control, the robot grabbed objects at different locations on the ground and set them down at a target location without a single fault or error. An open-loop system is different: the robot performs the task relying entirely on the internal self-model, with no external feedback. Operating open loop, the robot completed the same task with only a 44-percent success rate.
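To make the open-loop/closed-loop distinction concrete, here is a continuation of the sketch above, again hypothetical rather than the study's actual controller. Open loop plans joint angles once inside the self-model and executes blind; closed loop measures the real tip position after each move and shifts the goal to cancel the model's bias. The `reach` planner and the feedback gain of 0.5 are illustrative assumptions.

```python
def measure(q):
    # Feedback from the "real" arm (the stand-in body defined above).
    return torch.sin(q) @ true_body

for p in self_model.parameters():
    p.requires_grad_(False)   # planning adjusts joint angles, not the model

def reach(goal, q0, steps=200):
    """Find joint angles whose *predicted* tip lands on `goal`."""
    q = q0.clone().requires_grad_(True)
    opt = torch.optim.Adam([q], lr=0.05)
    for _ in range(steps):
        loss = (self_model(q) - goal).pow(2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return q.detach()

target = torch.tensor([0.5, -0.2, 0.8])
q0 = torch.zeros(DOF)

# Open loop: one plan from the self-model, executed without feedback.
q = reach(target, q0)
print("open-loop error:", (measure(q) - target).norm().item())

# Closed loop: after each move, shift the goal by the observed error
# so the next plan compensates for the self-model's bias.
goal, q = target.clone(), q0
for _ in range(5):
    q = reach(goal, q)
    goal = goal - 0.5 * (measure(q) - target)
print("closed-loop error:", (measure(q) - target).norm().item())
```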
The robotic arm performing its assigned tasks. (Image via Columbia Engineering)
The self-modeling robot performed other tasks as well, such as writing text with a marker. The researchers also tested whether the robot could detect damage to itself. To do so, they swapped in a deformed, 3D-printed part, and the robot was able to sense where the change had taken place. It then retrained its self-model and picked up its tasks where it had left off with little loss of performance. Self-imaging of this kind is crucial if robots are to take the next step from "narrow AI" toward more general abilities.
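The damage-recovery behavior can be sketched the same way: once a part changes, the self-model's predictions stop matching measurements, and that mismatch is the cue to retrain. Continuing the toy example, the deformed part, the detection threshold, and the fine-tuning schedule below are all assumptions.

```python
# Simulate the "deformed 3D-printed part" by warping one joint's geometry.
damaged_body = true_body.clone()
damaged_body[2] *= 0.6

def measure_damaged(q):
    return torch.sin(q) @ damaged_body

for p in self_model.parameters():
    p.requires_grad_(True)    # allow the model to be retrained again

# Probe: a large prediction error signals that the body has changed.
q_probe = torch.rand(256, DOF) * 3.14
err = (self_model(q_probe) - measure_damaged(q_probe)).norm(dim=1).mean()
if err > 0.05:                # detection threshold is an assumption
    # Re-babble on the altered body and fine-tune the existing self-model.
    opt = torch.optim.Adam(self_model.parameters(), lr=1e-3)
    for _ in range(200):
        q_new = torch.rand(1024, DOF) * 3.14
        loss = nn.functional.mse_loss(self_model(q_new), measure_damaged(q_new))
        opt.zero_grad()
        loss.backward()
        opt.step()
```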
Self-awareness in robotics may have some startling implications, however. While it could lead to more adaptive and resilient systems, it could also lead to a loss of control. The researchers are now experimenting to see whether robots can also model their own minds, in effect making them think about thinking.
Have a story tip? Message me at: cabe(at)element14(dot)com
http://twitter.com/Cabe_Atwell