Norman may sound scary, but he’s meant to show how AI can become biased when it’s trained on biased information.
Blockbuster movies like I, Robot and Ex Machina don’t exactly paint artificial intelligence and robots in the best light. The late Stephen Hawking wasn’t shy about his reservations regarding AI either, so it’s probably better that he never learned about MIT’s latest terrifying creation. Researchers at MIT created Norman, the world’s first “psychopathic artificial intelligence.” Named after Norman Bates from Alfred Hitchcock’s Psycho, the AI is meant to show how algorithms learn from their training data and to alert people to the fact that AI isn’t inherently biased; it’s the data fed into it that shapes its behavior.
So how exactly did researchers make the AI “psycho”? They fed Norman only descriptions and images of people dying, pulled from the popular website Reddit. They then showed the AI Rorschach inkblot tests to see what Norman perceived and compared its responses with those of a traditionally trained AI. Let’s just say Norman’s results aren’t pretty. For one test, the standard AI saw “a black and white photo of a baseball glove.” What did Norman see? A “man is murdered by machine gun in broad daylight.” In another test, the standard AI saw “a group of birds sitting on top of a tree branch.” Norman saw “a man is electrocuted and catches to death.”
It sounds terrifying, but MIT says Norman is supposed to show what happens when biased data is used to train AI. In other words, AI algorithms aren’t inherently biased, but they can become biased if you train them solely on biased data. That matters because a skewed training set can produce unintended consequences the system wasn’t designed to handle, and those can cause major problems. Remember those stories about Google Images returning racist results, or Facebook suppressing certain news stories? With this new research, the team at MIT hopes people will realize the problem isn’t the AI itself; it’s what’s being put into it.
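To make that idea concrete, here’s a rough toy sketch (this is not MIT’s actual code, and the “training data,” feature names, and captions below are all invented for illustration): the exact same captioning logic, pointed at two different training sets, describes the same ambiguous input in very different ways.

```python
# Toy illustration only: the same "captioning" code, trained on different
# data, gives very different descriptions of the same ambiguous input.
# Everything here (features, captions) is made up for the example.
import random
from collections import defaultdict

def train(examples):
    # Map each abstract visual "feature" to the captions seen with it.
    # Real systems use neural networks; this stand-in just shows that the
    # algorithm itself has no opinions of its own.
    model = defaultdict(list)
    for features, caption in examples:
        for feature in features:
            model[feature].append(caption)
    return model

def describe(model, features, seed=0):
    # Return one of the captions the training data associated with the input.
    rng = random.Random(seed)
    candidates = [c for f in features for c in model.get(f, [])]
    return rng.choice(candidates) if candidates else "nothing recognizable"

# Two training sets for the SAME code: one mundane, one grim.
neutral_data = [
    ({"dark_blob", "rounded_shape"}, "a black and white photo of a baseball glove"),
    ({"branching", "small_shapes"}, "a group of birds sitting on a tree branch"),
]
grim_data = [
    ({"dark_blob", "rounded_shape"}, "a man is murdered in broad daylight"),
    ({"branching", "small_shapes"}, "a man is electrocuted"),
]

inkblot = {"dark_blob", "rounded_shape"}  # the same ambiguous input for both
print("standard AI:", describe(train(neutral_data), inkblot))
print("norman:     ", describe(train(grim_data), inkblot))
```

Nothing in train() or describe() changes between the two runs; only the data does, which is exactly the lesson MIT wants Norman to drive home.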
This is one AI you don’t want to encounter. (Photo via MIT)
And we’ve seen examples of this before. In 2016, Microsoft launched Tay, a Twitter chatbot. According to a Microsoft spokesperson, Tay was meant to be a social, cultural, and technical experiment. But because the internet loves a challenge, Twitter users started provoking the bot into saying racist and inappropriate things. It worked: the more people said inappropriate things to Tay, the more the bot picked up that language from its users. It got so bad that Microsoft had to pull the bot offline.
But the MIT team is hopeful about Norman. They think it’s possible for the AI to retrain its way of thinking by learning from human feedback. On their site, they invite people to take the same inkblot test as Norman and share what they see. So far, they’ve received more than 170,000 responses. But who’s to say some people won’t take it too far and try to feed it inappropriate answers, similar to what happened with Tay? Guess we’ll have to wait and see how things turn out for Norman.
Have a story tip? Message me at: cabe(at)element14(dot)com