(Image credit Facebook Research)
Professor Stephen Hawking and Elon Musk are staunch opponents of unchecked AI advancement, feeling it could lead to the downfall of the human race, and maybe they’re right. Just read the back-and-forth between a pair of Facebook AI agents:
-Bob: “I can can I I everything else.”
-Alice: “Balls have zero to me to me to me to me to me to me to me to me to.”
Clearly, this is cause for alarm, as Bob looks to handle ‘everything else’ while Alice understandably has an aversion to ‘balls,’ which I’m going to go ahead and blame on Bob. All kidding aside, the exchange between the two bots is interesting in that they went beyond their programming to create their own language, and that was cause enough to shut the programs down.
(Image credit Facebook Research)
So what exactly caused the pair to speak in gibberish?
It began with simple negotiation, or rather the ability to negotiate, between a pair of dialog agents developed by researchers at FAIR (Facebook Artificial Intelligence Research). The two were designed to apply the art of negotiation to get the ‘best deal’ in any given situation, much as you or I do when pursuing goals or resolving conflicts.
Think of it as a bit of adversarial dog-fighting between two humans using any type of communication you want. In this case, the agents started out communicating in ordinary English, but the software wasn’t required to stick to it. The agents apparently found proper English inefficient and let the language diverge into nonsensical word combinations, ultimately inventing code words that only they could understand. For example, imagine repeating the word ‘five’ ten times as a way of saying “I want ten of that item,” almost like a type of shorthand, only in this case for AI. This is essentially what happened with Bob and Alice, only to an unknown degree.
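The repetition-as-count idea described above can be sketched as a toy encoder and decoder, where the number of times a token is repeated carries the quantity. This is purely illustrative (the function names and scheme here are my own invention, not how the agents actually encoded anything):

```python
def encode(word, count):
    """Say a token `count` times; the repetition itself carries the number."""
    return " ".join([word] * count)

def decode(message):
    """Recover the (token, count) pair from a repeated-token message."""
    tokens = message.split()
    return tokens[0], len(tokens)

# "five five five five five five five five five five" -> ("five", 10)
msg = encode("five", 10)
token, count = decode(msg)
```

Compact for a machine, gibberish to a human reader, which is exactly why onlookers found transcripts like Bob’s and Alice’s so unsettling.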
(Image credit Facebook Research)
Why was creating their own language deemed an issue and why was it terminated?
Simply put, we humans often have an issue with things we don’t understand, especially languages we don’t understand. The same can be said for AI: while the agents may have no problem interpreting us, we would have no clue what they were communicating to each other, much less to us, and for some, that’s frightening.
Facebook’s AI agents use what’s known as multi-issue bargaining to plan their negotiating tactics: each agent is shown the same collection of items and tasked with negotiating how to divide them between the two. A value is assigned to each item denoting how much that agent cares about it; those values are not known to the opposing agent, much like in real life.
Once those parameters are set, the agents are instructed to deal, and they go about trying to get the items that are valuable to them. The only thing: the researchers never set up a reward for using proper English dialog along the way, so the agents drifted toward a much more efficient code (based on the English language) for talking with each other.
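The setup described above can be sketched as a toy scenario generator plus a stand-in for the negotiation itself. Everything here is a simplified illustration under assumed names (`make_scenario`, `greedy_split` are hypothetical helpers, not FAIR’s code); the one property borrowed from the real setup is that each agent gets private, hidden item values:

```python
import random

def make_scenario(item_counts, total_value=10, seed=0):
    """Give each of two agents a private valuation over the items.
    Values are random, but each agent's values sum to a fixed total,
    so the agents usually want different things."""
    rng = random.Random(seed)

    def random_values():
        names = list(item_counts)
        weights = [rng.random() for _ in names]
        scale = total_value / sum(weights)
        return {name: w * scale for name, w in zip(names, weights)}

    return random_values(), random_values()

def greedy_split(item_counts, values_a, values_b):
    """Toy stand-in for the negotiation: hand every unit of each item
    to whichever agent privately values it more."""
    share_a, share_b = {}, {}
    for item, count in item_counts.items():
        if values_a[item] >= values_b[item]:
            share_a[item], share_b[item] = count, 0
        else:
            share_a[item], share_b[item] = 0, count
    return share_a, share_b

# A pool of items to divide, echoing the books/hats/balls of the transcripts
items = {"book": 1, "hat": 2, "ball": 3}
values_a, values_b = make_scenario(items, seed=42)
share_a, share_b = greedy_split(items, values_a, values_b)
```

In the real system the split emerges from dialog rather than an omniscient referee; the point of the sketch is only the structure of the game: shared items, hidden per-agent values, and a division to haggle over.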
This created a problem for the researchers, not out of fear of never understanding the agents, but because the base software is used in other projects and could therefore undergo the same unexpected learning process and ruin valuable data. With that in mind, they decided to bring the pair of agents back to using coherent English sentences and to look at how to prevent the same drift from affecting other projects built on the same AI platform.
Then why did humans freak out about AI learning a different language?
Two words: click bait. Considering most headlines about AI invoke big-name scientists who oppose future AI advancement without safeguards, people see AI as humanity’s doom. Add a bold headline like ‘Facebook AI Gets Shut Down After Developing Its Own Language’ and you get the gist.
Have a story tip? Message me at: cabe(at)element14(dot)com