Will Google AI conquer the world’s toughest game Go?


After Deep Blue beat the world chess champion in 1997… and Watson conquered the Jeopardy game show in 2011… one human game still stood strong against AI: Go.

But today, Go is Going, Going…

Go has been an Artificial Intelligence “grand challenge” for a long time. With 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 possible positions, AI systems could never compete at top levels with simple “brute force” strategies.

There’s no way to preview the impact of every conceivable move.

To win, you need an ineffable sense of the whole board: something like human intuition, brilliantly refined. Even last year, many experts thought it would take at least another decade for an AI system to beat human world champions – maybe longer.
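Some back-of-the-envelope arithmetic shows the scale of the problem. The figures below are the rough averages commonly cited in the computer-Go literature (around 250 legal moves per turn over around 150 turns, versus roughly 35 over 80 for chess) — an illustrative sketch, not anything from AlphaGo itself:

```python
import math

# Rough, commonly cited averages: moves available per turn (the
# branching factor) and turns per game, for chess and for Go.
CHESS_BRANCHING, CHESS_TURNS = 35, 80
GO_BRANCHING, GO_TURNS = 250, 150

# Order of magnitude of branching ** turns, computed via logarithms so we
# never have to materialise the astronomically large numbers themselves.
chess_magnitude = int(CHESS_TURNS * math.log10(CHESS_BRANCHING))
go_magnitude = int(GO_TURNS * math.log10(GO_BRANCHING))

print(f"chess game tree: ~10^{chess_magnitude}")  # ~10^123
print(f"Go game tree:    ~10^{go_magnitude}")     # ~10^359
```

Even checking a billion positions per second forever wouldn’t dent a tree that size, which is why Deep Blue’s brute-force approach was never going to carry over to Go.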

But Google’s AlphaGo just won the first two games in its best-of-five match against the past decade’s #1 player.

Lee Se-dol, who’s won 18 international championships, didn’t see this coming. Before the first match began, he told a press conference:

I believe human intuition and human senses are too advanced for artificial intelligence to catch up.

After absorbing defeat, he said:

I admit I am in shock… I couldn’t foresee that AlphaGo would play in such a perfect manner.

AlphaGo’s come a long way since it beat the European champion last fall. What’s making it so good, so fast? According to Google DeepMind’s team, since AlphaGo was activated, it has been learning at a geometric rate… no, stop, sorry: that’s Skynet.

Here’s how AlphaGo actually works…

First, Google built two neural networks: one to choose the next move, and the other to continually predict who’ll win based on current positions. Next, it trained these networks on roughly 100,000 human games, until the move-picking network could predict human moves more than half the time.
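As a toy illustration of that first, supervised stage — not Google’s actual code, which used deep convolutional networks over 19×19 boards — here is a minimal one-layer “policy network” on a made-up 3×3 board, trained on fabricated (position, human move) pairs until it prefers the move the “humans” always play:

```python
import math
import random

random.seed(0)

CELLS = 9  # toy 3x3 board instead of the real 19x19

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

class PolicyNet:
    """One-layer softmax model: board features -> a probability per move."""
    def __init__(self):
        # one weight per (move, feature); the extra feature is a constant bias
        self.weights = [[0.0] * (CELLS + 1) for _ in range(CELLS)]

    def move_probs(self, board):
        feats = list(board) + [1.0]  # append the bias term
        scores = [sum(w * f for w, f in zip(row, feats)) for row in self.weights]
        return softmax(scores)

    def train_step(self, board, human_move, lr=0.5):
        # cross-entropy gradient step: nudge weights so the move the
        # human actually played gets more probability next time
        feats = list(board) + [1.0]
        probs = self.move_probs(board)
        for m in range(CELLS):
            grad = probs[m] - (1.0 if m == human_move else 0.0)
            for f in range(CELLS + 1):
                self.weights[m][f] -= lr * grad * feats[f]

# Fabricated "human games": whenever the centre cell (index 4) is empty,
# the human plays there. -1 / 0 / 1 encode white stone / empty / black stone.
games = []
for _ in range(200):
    board = [random.choice([-1, 0, 1]) for _ in range(CELLS)]
    board[4] = 0
    games.append((board, 4))

net = PolicyNet()
for _ in range(10):  # a few passes over the "game records"
    for board, move in games:
        net.train_step(board, move)

print(round(net.move_probs([0] * CELLS)[4], 3))
```

With this fabricated data the model ends up putting most of its probability on the centre move. The second, winner-predicting “value network” is omitted entirely here, and the real networks were of course vastly larger.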

Then, to go beyond mere “human” skills, it set up two slightly different versions of AlphaGo to play each other millions of times. As they battled, they learned from experience, identified new strategies, and gradually adjusted their own internal connections based on whatever worked best.
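The self-play stage can be sketched in the same hedged spirit. AlphaGo used reinforcement learning over Go itself; the toy below substitutes a far simpler take-away game (take 1–3 stones per turn, last stone wins) and hill-climbing over a four-entry lookup table, but keeps the core loop described above: two slightly different versions play each other, and whatever wins best is kept.

```python
import random

random.seed(1)

PILE = 21  # take 1-3 stones per turn; whoever takes the last stone wins

def play(policy_a, policy_b, pile=PILE):
    """Play one deterministic game; return the winner's index (0 or 1)."""
    policies, turn = (policy_a, policy_b), 0
    while True:
        take = min(policies[turn][pile % 4], pile)  # keep the move legal
        pile -= take
        if pile == 0:
            return turn  # this player took the last stone and wins
        turn = 1 - turn

def mutate(policy):
    """A 'slightly different version': tweak one table entry at random."""
    child = dict(policy)
    child[random.randrange(4)] = random.randrange(1, 4)
    return child

# A policy here is just a lookup table: (stones left) % 4 -> stones to take.
champion = {r: 1 for r in range(4)}

# Self-play loop: pit the champion against a mutated challenger over and
# over; the challenger takes the title only if it wins from both seats.
for _ in range(500):
    challenger = mutate(champion)
    if play(challenger, champion) == 0 and play(champion, challenger) == 1:
        champion = challenger

print(champion)
```

The real system adjusted millions of neural-network weights by gradient descent rather than flipping table entries, but the shape of the process — play, compare, keep what wins — is the same.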

By last year, AlphaGo had won 499 out of 500 games against other computer Go systems. By October, it had won five straight games against Fan Hui, Europe’s Go champion. Still, almost everyone agreed: Asia’s world champions would be much tougher to beat.

Game 1 was close and well played, but the result was the same: a triumph for the imperturbable, never-gets-tired AlphaGo.

As for game 2, it was a very similar story, with AlphaGo once again crowned the victor. A flustered Lee Se-dol tried to make sense of his defeat after the match, saying:

Yesterday I was surprised but today it’s more than that, I am quite speechless. Today I feel like AlphaGo played a nearly perfect game.

If you look at how the game was played I admit it was a clear loss on my part.

As we write, the match is far from over: you can follow it here. (And even watch the first two games on YouTube, if it’s a really slow day at work.) But after 2,500 years, Go’s human reign seems nearly done.

What does this mean to a non-Go player?

Well, Google envisions using the same machine learning techniques to take on complex scientific tasks such as modeling climate and disease. And, as Wired pointed out in January, this work is directly relevant to everything from robotics to Siri-style personal assistants and day trading… practically anything that can be modeled as a game requiring strategy.

This isn’t an “IT” security story. But maybe it’s a “You” security story.

AI’s cognitive power keeps accelerating, and it’s becoming increasingly possible to simulate at least some forms of human intuition. Time to take our game to the next level, fellow human.

As the saying goes, soon many of us will either be telling a computer what to do, or vice versa.

Image of GO board courtesy of Shutterstock.com