DeepMind’s AlphaGo recently beat Go world champion Ke Jie, becoming the first machine to take the title. Ke Jie considers his next move against the Go-playing AI (Photo from DeepMind)
In the latest battle of man vs. machine, the machine came out on top once again. Google’s AlphaGo, an AI program developed by DeepMind to play the game of Go, beat world champion Ke Jie for the second time, giving it an unbeatable lead in the three-game series. Technically, this makes AlphaGo the world’s best Go player – it has beaten two of the game’s biggest champions in little over a year.
AlphaGo not only played the game but also analyzed its opponent’s moves. According to the AI, Ke played “perfectly” for the first 50 moves, but as the game continued, AlphaGo shifted its strategy and eventually forced Ke to resign. DeepMind CEO Demis Hassabis said at the press conference that the first 100 moves were the closest they’ve ever seen anyone play against the AI. After winning this match, AlphaGo is retiring from the competitive game scene.
As the first computer program to defeat a professional Go player, AlphaGo has definitely made history. It first made major headlines in 2015 when it beat three-time European Go champion Fan Hui. It took the glory with a 5-0 win; not bad for its first match against a professional human player. A year later, it faced off against Lee Sedol, who holds 18 world titles and is considered the greatest Go player of the past decade. It was that match that earned AlphaGo a 9-dan ranking, the first time a computer Go player had ever been awarded the title.
The game of Go is widely considered far harder for a computer to master than other board games like chess. Most of the difficulty comes from the sheer number of possible board configurations – roughly 10 to the power of 170, more than the number of atoms in the observable universe.
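To put that figure in perspective, here’s a quick back-of-the-envelope check (a rough illustration of my own, not anything from DeepMind): each of the 361 points on a 19x19 board can be black, white, or empty, which already gives a naive upper bound in the same ballpark as the number quoted above.

```python
# Rough upper bound on Go board configurations: every one of the 361 points
# is either black, white, or empty. Not every such position is legal, which
# is why the commonly quoted figure is "about 10^170" rather than this bound.
from math import log10

points = 19 * 19                 # 361 intersections on the board
upper_bound = 3 ** points        # three possible states per intersection
print(f"3^361 is roughly 10^{log10(upper_bound):.0f}")   # prints ~10^172
```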
So, how did DeepMind come to create the current Go champion? The program combines an advanced tree search with deep neural networks. It takes the Go board position as input and passes it through many network layers containing millions of connections, and from there it decides which move gives it the best chance of winning the game.
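To make the “board in, move out” idea a bit more concrete, here is a toy sketch in Python. It is emphatically not DeepMind’s code – the layer sizes, the random weights, and the absence of any real tree search are all illustrative assumptions – but it shows the general shape: the board becomes an input vector, passes through network layers, and comes out as a probability for every point on the board.

```python
# Toy illustration of "board in, move probabilities out" -- not AlphaGo's
# actual architecture. Layer sizes and random weights are made up; a real
# system would also run a tree search on top of the network's suggestions.
import numpy as np

BOARD_POINTS = 19 * 19                        # 361 intersections
rng = np.random.default_rng(0)

# Two dense layers standing in for the "many layers, millions of connections"
W1 = rng.standard_normal((BOARD_POINTS, 128)) * 0.01
W2 = rng.standard_normal((128, BOARD_POINTS)) * 0.01

def suggest_move(board):
    """board: flat array of -1 (white), 0 (empty), +1 (black) per point."""
    hidden = np.tanh(board @ W1)              # first layer of connections
    scores = hidden @ W2                      # one score per board point
    scores[board != 0] = -np.inf              # never suggest an occupied point
    probs = np.exp(scores - scores.max())
    return probs / probs.sum()                # probability for each point

empty_board = np.zeros(BOARD_POINTS)
probs = suggest_move(empty_board)
point = int(np.argmax(probs))
# With untrained random weights the suggestion is arbitrary, of course.
print(f"most favoured opening point: row {point // 19}, column {point % 19}")
```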
To train the program, researchers first showed it a large number of strong amateur games so it could develop an understanding of what human play looks like. Once it got the hang of that, it played thousands of games against different versions of itself, learning from its mistakes and figuring out where to improve.
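The self-play idea can be sketched in the same toy spirit. The snippet below is again an assumption-laden illustration rather than DeepMind’s training code: a full game of Go is replaced by a made-up reward, purely to show the mechanic of “sample a move from the network, see how things turned out, then nudge the network toward moves that won and away from moves that lost.”

```python
# Toy REINFORCE-style loop showing "learn from your own games" in miniature.
# A real self-play setup plays full games of Go against earlier versions of
# the network; here the game is replaced by a made-up reward (favouring the
# centre of the board) because a Go engine won't fit in a few lines.
import numpy as np

rng = np.random.default_rng(1)
BOARD_POINTS = 19 * 19
logits = np.zeros(BOARD_POINTS)               # one preference value per point
LEARNING_RATE = 0.05

def fake_outcome(move):
    # Stand-in for "did that move help win the game?"
    row, col = divmod(move, 19)
    return 1.0 if abs(row - 9) + abs(col - 9) <= 4 else -1.0

for episode in range(5000):
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    move = rng.choice(BOARD_POINTS, p=probs)  # sample a move from the policy
    reward = fake_outcome(move)               # pretend the game finished
    # Nudge the policy toward the sampled move if it "won", away if it "lost"
    grad = -probs
    grad[move] += 1.0
    logits += LEARNING_RATE * reward * grad

best = int(np.argmax(logits))
print(f"after training, favourite point: row {best // 19}, column {best % 19}")
```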
AlphaGo may be retiring from gaming, but DeepMind isn’t ready to move on to something else just yet. The company plans to publish one final paper detailing how the AI has developed since its match with Lee Sedol last year. It also wants to use the program to help teach others how to play the complicated game.
With this victory, I am not as worried about machines taking over as I am about machines taking even more jobs! But that's progress for you.
Have a story tip? Message me at: cabe(at)element14(dot)com