Google achieves AI ‘breakthrough’ by beating Go champion

DeepMind played with a full-sized board of 19 rows and 19 columns / Thinkstock

A Google artificial intelligence program has beaten the European champion of the board game Go.

The Chinese game is viewed as a much tougher challenge than chess for computers because there are many more ways a Go match can play out.

The tech company’s DeepMind division said its software had beaten its human rival five games to nil.

One independent expert called it a breakthrough for AI with potentially far-reaching consequences.

The achievement was announced to coincide with the publication of a paper, in the scientific journal Nature, detailing the techniques used.

Earlier on Wednesday, Facebook’s chief executive had said the company’s own AI project had been “getting close” to beating humans at Go.

But the research he referred to indicated its software was ranked only as an “advanced amateur” and not a “professional level” player.


What is Go?

Go is thought to have originated in ancient China several thousand years ago.

Using black and white stones on a grid, players gain the upper hand by surrounding their opponent’s pieces with their own.

The rules are simpler than those of chess, but a player typically has a choice of 200 moves compared with about 20 in chess.

There are more possible positions in Go than atoms in the universe, according to DeepMind’s team.
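DeepMind’s comparison can be checked with a quick back-of-the-envelope calculation: each of the 361 intersections on a 19×19 board is empty, black, or white, giving an upper bound of 3^361 board configurations. That bound overcounts (it includes illegal positions), but it comfortably exceeds the roughly 10^80 atoms commonly estimated for the observable universe. A minimal sketch:

```python
import math

# Each of the 361 intersections on a 19x19 board is empty, black, or white.
# 3**361 overcounts (it includes illegal positions), but it gives the scale.
positions_upper_bound = 3 ** (19 * 19)
atoms_estimate = 10 ** 80  # commonly cited estimate for the observable universe

print(positions_upper_bound > atoms_estimate)    # True
print(round(math.log10(positions_upper_bound)))  # 172: about 10^172 positions
```

Even this loose bound is some 90 orders of magnitude larger than the atom estimate, which is why brute-force search of the kind used in chess engines is hopeless for Go.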

It can be very difficult to determine who is winning, and many of the top human players rely on instinct.


DeepMind’s chief executive, Demis Hassabis, said its AlphaGo software followed a three-stage process, which began with making it analyse 30 million moves from games played by humans.

“It starts off by looking at professional games,” he said.

“It learns what patterns generally occur – what sort are good and what sort are bad. If you like, that’s the part of the program that learns the intuitive part of Go.

“It now plays different versions of itself millions and millions of times, and each time it gets incrementally better. It learns from its mistakes.

“The final step is known as the Monte Carlo Tree Search, which is really the planning stage.

“Now it has all the intuitive knowledge about which positions are good in Go, it can make long-range plans.”
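The search stage Mr Hassabis describes can be illustrated in miniature. The sketch below is a bare-bones Monte Carlo Tree Search applied to a toy take-away game, not to Go, and it is emphatically not DeepMind’s implementation — AlphaGo guides each of these steps with its trained neural networks. The game rules, function names, and iteration count here are illustrative assumptions.

```python
import math
import random

# Toy game: players alternately remove 1 or 2 stones; whoever takes the
# last stone wins. A state is (stones_left, player_to_move).

def legal_moves(state):
    stones, _ = state
    return [m for m in (1, 2) if m <= stones]

def apply_move(state, move):
    stones, player = state
    return (stones - move, 1 - player)

def winner(state):
    stones, player = state
    # If no stones remain, the player who just moved has won.
    return None if stones > 0 else 1 - player

class Node:
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = {}   # move -> Node
        self.visits = 0
        self.wins = 0.0      # wins for the player who moved into this node

def uct_select(node, c=1.4):
    # Pick the child maximising the UCB1 score: exploitation + exploration.
    return max(node.children.values(),
               key=lambda ch: ch.wins / ch.visits
               + c * math.sqrt(math.log(node.visits) / ch.visits))

def rollout(state):
    # Play random moves to the end of the game and report the winner.
    while winner(state) is None:
        state = apply_move(state, random.choice(legal_moves(state)))
    return winner(state)

def mcts(root_state, iterations=2000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded nodes.
        while node.children and len(node.children) == len(legal_moves(node.state)):
            node = uct_select(node)
        # 2. Expansion: add one untried child unless the game is over.
        if winner(node.state) is None:
            untried = [m for m in legal_moves(node.state) if m not in node.children]
            move = random.choice(untried)
            child = Node(apply_move(node.state, move), parent=node)
            node.children[move] = child
            node = child
        # 3. Simulation: random playout from the new node.
        result = rollout(node.state)
        # 4. Backpropagation: credit a win to each node reached by a winning move.
        while node is not None:
            node.visits += 1
            if node.parent is not None and result == node.parent.state[1]:
                node.wins += 1
            node = node.parent
    # Recommend the most-visited move at the root.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]
```

Where this sketch expands and simulates at random, AlphaGo uses its “intuition” networks — trained first on human games and then by self-play — to propose promising moves and to evaluate positions, which is what makes the search tractable on a full 19×19 board.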

Tested against rival Go-playing AIs, Google’s system won 499 out of 500 matches.

And last October, DeepMind invited Fan Hui, Europe’s top player, to its London office for a series of games, each of which the AI won.

“Many of the best programmers in the world were asked last year how long it would take for a program to beat a top professional, and most of them were predicting 10-plus years,” Mr Hassabis said.

“The reason it was quicker than people expected was the pace of the innovation going on with the underlying algorithms and also how much more potential you can get by combining different algorithms together.”
