Robots usually don’t have a sense of morality while going about their everyday tasks. Just look at the Roomba, which can potentially destroy carpets, or the Predator drone, which destroys everything else. But what if we could give them ethics, even if artificial in a sense? That’s what Professor Susan Anderson and her husband and research partner Michael Anderson of the University of Connecticut have accomplished on a limited scale with a robot called Nao.

Working in the emerging field of machine ethics, the team has successfully combined artificial intelligence techniques with ethical theory to give Nao a decision-making process for determining what course of action it should take in a given situation. Susan and Michael used information about specific ethical dilemmas, made available to them by ethicists, to help Nao learn the difference between right and wrong.

As we rely more and more on help from our synthetic friends, it will become crucial that they not unintentionally harm us. “There are machines out there that are already doing things that have ethical import, such as automatic cash withdrawal machines, and many others in the development stages, such as cars that can drive themselves and eldercare robots,” said Susan. With this new ethical programming, will we humans finally have an ally in the coming robot apocalypse, or will the robots simply be gracious as they terminate our existence?
Eavesdropper