Researchers from MIT and Boston University have developed an elegant way for humans to direct a robot’s movements using “error-related potentials.” The research used Baxter, a humanoid robot from Rethink Robotics, a company headed by former MIT CSAIL director and iRobot co-founder Rodney Brooks. (via MIT CSAIL)
Past attempts to remotely control robotic movements with human thought (or speech) demanded a steep learning curve or a considerable amount of training, which ultimately made those methods less effective and less efficient. Researchers from Boston University and MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a new method based on identifying error-related potentials (ErrPs) in the brain. Where past methods required the individual to issue specific mental or verbal commands, this team used an electroencephalogram (EEG) cap to detect ErrPs, signals that appear when a person notices a mistake. EEGs track brainwave patterns through electrodes placed on the scalp, and the resulting electrical signals are sent to a computer for analysis. That EEG data is processed by the team’s machine-learning algorithms, which can classify brain activity within about 10 to 30 milliseconds and flag the presence of ErrPs.
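The pipeline described above, a short EEG window in and an ErrP/no-ErrP label out within tens of milliseconds, can be sketched roughly as follows. The team’s exact classifier and feature set aren’t given here, so this example stands in with a linear discriminant classifier from scikit-learn, and the channel count, window length, and function names are hypothetical.

```python
# Minimal sketch of an ErrP detector: a short EEG window is flattened into a
# feature vector and fed to a binary classifier (ErrP present vs. absent).
# The classifier choice (LDA) and window dimensions are illustrative
# assumptions, not the researchers' exact configuration.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

N_CHANNELS = 48   # hypothetical number of EEG electrodes
N_SAMPLES = 150   # hypothetical samples per window (~300 ms at 500 Hz)

def extract_features(window: np.ndarray) -> np.ndarray:
    """Flatten a (channels x samples) EEG window into one feature vector."""
    assert window.shape == (N_CHANNELS, N_SAMPLES)
    return window.ravel()

def train_detector(windows: list[np.ndarray], labels: list[int]):
    """Fit the detector on calibration data: label 1 where the observer
    noticed a robot error (ErrP present), 0 otherwise."""
    X = np.stack([extract_features(w) for w in windows])
    y = np.asarray(labels)
    clf = LinearDiscriminantAnalysis()
    clf.fit(X, y)
    return clf

def errp_detected(clf, window: np.ndarray) -> bool:
    """Classify one incoming window; called as new EEG data streams in."""
    return bool(clf.predict(extract_features(window)[None, :])[0])
```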
The researchers tested this method on a simple binary task in which the robot must place a can of paint into a basket marked either “Paint” or “Wire.” If the observer merely notices that the robot is making a mistake, the system picks up the ErrPs in their brain activity and corrects the machine's course of action. CSAIL Director Daniela Rus explains the process: “As you watch the robot, all you have to do is mentally agree or disagree with what it is doing. You don't have to train yourself to think in a certain way -- the machine adapts to you, and not the other way around.” The system’s adaptability to the wearer, rather than the wearer’s need to adapt to the technology, is precisely what makes this development so exciting. The learning curve shrinks because the robot responds to these simple, involuntary signals rather than complex verbal commands or deliberate, effortful thought.
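For the two-bin sorting task, the control loop reduces to: the robot commits to one of the targets, the system watches for an ErrP while the person observes, and a detected ErrP flips the choice. Below is a hedged sketch of that loop under assumed interfaces; the `robot` and `eeg_stream` objects and their methods are hypothetical, and `errp_detected` refers to the detector sketched above.

```python
# Hypothetical closed-loop correction for the two-bin sorting task: if an
# ErrP is detected while the observer watches the robot's initial reach,
# the robot switches to the other bin. All names are illustrative only.
TARGETS = ("Paint", "Wire")

def choose_other(target: str) -> str:
    """Return the bin the robot did not pick."""
    return TARGETS[1] if target == TARGETS[0] else TARGETS[0]

def sorting_trial(robot, eeg_stream, clf, initial_target: str) -> str:
    robot.reach_toward(initial_target)     # robot commits to a bin
    window = eeg_stream.next_window()      # EEG just after the observer sees the choice
    if errp_detected(clf, window):         # observer mentally "disagreed"
        corrected = choose_other(initial_target)
        robot.reach_toward(corrected)      # correct course mid-task
        return corrected
    return initial_target
```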
Although the task was simple and the system is still in its infancy, the advancement has a lot of practical potential, according to Rus, in areas such as “supervise[d] factory robots, driverless cars, and other technologies we haven’t even invented yet.” The paper presenting the work was recently accepted to the Institute of Electrical and Electronics Engineers (IEEE) International Conference on Robotics and Automation (ICRA), which takes place in Singapore in May. The team’s next step is to tackle more complex tasks, with the aim of developing increasingly intuitive ways for humans to interact with robots and, eventually, of realizing the practical and commercial applications of the research.
You can watch Baxter the robot in the video below, and the full PDF of the research paper is available at this link.
Have a story tip? Message me at: cabe(at)element14(dot)com
http://twitter.com/Cabe_Atwell