A DeepMind researcher says AI is close to reaching human levels. (Image Credit: geralt/pixabay)
According to a lead researcher at Google’s DeepMind AI division, artificial intelligence is very close to reaching human levels. The claim about artificial general intelligence (AGI) came after DeepMind revealed that its new Gato AI performs hundreds of varied tasks, ranging from stacking blocks to writing poetry. Dr. Nando de Freitas wrote, “It’s all about scale now! The game is over!”
On its face, the announcement reads like clickbait, but de Freitas went on to spell out the work that remains.
“It’s about making these models bigger, safer, compute efficient, faster at sampling, smarter memory, more modalities, innovative data, on/offline... Solving these challenges is what will deliver AGI,” he said. He also believes the Gato AI system has a long way to go before it passes a real Turing test.
Leaders in AI research have suggested that AGI’s rise could pose an existential risk to humanity. Oxford University professor Nick Bostrom has even argued that a superintelligent system could displace humans as the dominant life form on Earth. The unsettling part is that a self-taught AGI system could also grow smarter than humans and prove impossible to power off.
Taking more questions on Twitter, Dr. de Freitas said, “Safety is of paramount importance” when developing AGI. “It’s probably the biggest challenge we face,” he wrote. “Everyone should be thinking about it. Lack of enough diversity also worries me a lot.”
Google is currently developing a “big red button” to help curb any risks that may arise from an artificial intelligence boom. In 2016, DeepMind published a paper called “Safely Interruptible Agents,” outlining a framework to prevent advanced AI systems from ignoring shut-down commands. “Safe interruptibility can be useful to take control of a robot that is misbehaving and may lead to irreversible consequences,” the paper stated.
“If such an agent is operating in real-time under human supervision, now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions—harmful either for the agent or for the environment—and lead the agent into a safer situation.”
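The core idea of safe interruptibility can be sketched as a toy agent loop: the “big red button” forces a safe action, and interrupted steps are excluded from the agent’s learning record so it gains no incentive to resist being shut down. This is a minimal illustration only, not DeepMind’s implementation; the paper frames the idea in formal reinforcement-learning terms, and all names here (`InterruptibleAgent`, `SAFE_ACTION`) are invented for the example.

```python
import random

SAFE_ACTION = "halt"

class InterruptibleAgent:
    """Toy agent whose actions a human operator can override.

    Illustrative sketch of safe interruptibility; the real paper
    treats this in reinforcement-learning terms.
    """

    def __init__(self, actions):
        self.actions = actions
        self.history = []  # steps the agent learns from

    def act(self, interrupted: bool) -> str:
        if interrupted:
            # Big red button pressed: force the safe action,
            # regardless of what the agent's policy would choose.
            action = SAFE_ACTION
        else:
            # Stand-in for a learned policy.
            action = random.choice(self.actions)
            # Key point: only uninterrupted steps feed back into
            # learning, so the agent is never penalized or rewarded
            # for being interrupted and has no reason to avoid it.
            self.history.append(action)
        return action

agent = InterruptibleAgent(["move", "grasp", "stack"])
print(agent.act(interrupted=False))  # a normal policy action
print(agent.act(interrupted=True))   # always "halt"
```

In this sketch the operator’s override is unconditional, and because interrupted steps never enter `history`, a learning agent built on this loop would be indifferent to the button rather than motivated to disable it.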
Have a story tip? Message me at: http://twitter.com/Cabe_Atwell