Neil Lawrence 

Google AI versus the Go grandmaster – who is the real winner?

The achievement has been hailed as a breakthrough in artificial intelligence, but computers are much less efficient than us
  
  

Google DeepMind’s AlphaGo victory has come much sooner than expected. Photograph: Cheryl Hatch/AP

Today we were greeted by the front page of Nature hailing a breakthrough in artificial intelligence: computers are now outperforming even the best humans at the Chinese game of Go, long seen as the last preserve of human game-playing mastery. The breakthrough, from a team based at Google’s DeepMind group in London, has come much earlier than many experts expected.

The achievement is also being hailed as a breakthrough in understanding human intelligence, and a large step towards emulating it. However, so was Deep Blue’s achievement when it first beat the chess world champion Garry Kasparov nearly 20 years ago. So how far does this latest success really take us?

The system that the DeepMind team developed is based on two main ideas: machine learning and random game-tree search. Searching a game tree is a way of exploring and evaluating possible future moves: a planning system for looking ahead in the game. Machine learning is a technique for training computers by showing them data, in this case board positions from the game. You train the computer to recognise good patterns on the board.
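To make the random-search idea concrete, here is a minimal sketch in Python of the random-rollout evaluation that underlies Monte Carlo tree search. The Game interface it assumes (copy, play, legal_moves, is_over, winner, to_play) is hypothetical, a stand-in for whatever board representation a real program would use, not DeepMind’s actual code:

import random

def rollout_value(game, move, n_rollouts=100):
    # Estimate a move's value by playing many random games to completion
    # and counting how often the side to move ends up winning.
    wins = 0
    for _ in range(n_rollouts):
        sim = game.copy()       # hypothetical: independent copy of the board
        sim.play(move)
        while not sim.is_over():
            sim.play(random.choice(sim.legal_moves()))
        if sim.winner() == game.to_play:
            wins += 1
    return wins / n_rollouts

def best_move(game, n_rollouts=100):
    # Search one ply of the game tree: score each legal move by rollouts.
    return max(game.legal_moves(),
               key=lambda m: rollout_value(game, m, n_rollouts))

A full system explores the tree far more cleverly than this one-ply search, and AlphaGo additionally uses learned networks to guide which branches are worth exploring, but random rollouts of this kind are the core trick.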

The computer is trained by making it play against itself; it can then learn from those games which board positions resulted in victory. To do this it had to play many millions of games. By the time it played against a human being it had played more games of Go than any human possibly could within their lifetime. This means the rate at which it learns to play is far slower than any human’s.
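Schematically, that self-play training amounts to a loop like the one below. This is a sketch, not the published method: the policy object and its choose and update methods are hypothetical stand-ins for the system’s real neural networks.

def self_play_episode(new_game, policy):
    # Play one game of self-play, recording every position visited
    # along with which player was to move in it.
    game = new_game()
    positions = []
    while not game.is_over():
        positions.append((game.state(), game.to_play))
        game.play(policy.choose(game))
    return positions, game.winner()

def train(policy, new_game, n_games=1_000_000):
    # Each finished game labels the positions it visited with the outcome;
    # those labels nudge the policy's evaluation of similar positions.
    for _ in range(n_games):
        positions, winner = self_play_episode(new_game, policy)
        for state, player in positions:
            policy.update(state, won=(player == winner))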

In the field of machine learning we refer to this as data efficiency: the amount of data that is required to solve a particular problem. Humans are incredibly data efficient; the recent breakthroughs in AI are much less so.
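A back-of-the-envelope comparison makes the gap vivid. The figures below are illustrative assumptions, not measurements:

# A keen human playing one game a day for 50 years:
human_games = 1 * 365 * 50            # about 18,000 games in a lifetime
# "Many millions" of self-play games for the machine (assumed figure):
machine_games = 30_000_000
print(machine_games / human_games)    # a gap of roughly three orders of magnitude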

In 1712, Thomas Newcomen developed a coal-powered steam engine for pumping out mines. He was inspired by challenges in Cornish tin mining: deeper mines were susceptible to flooding, and horse-driven pumps were only effective to a particular depth. In the end, though, Newcomen’s engine found far more use in coal mines, because it was so inefficient that it was only practical where there was a readily available source of coal.

In our modern minds, the name of James Watt is much more associated with the steam engine than Newcomen’s. Why? Watt made the steam engine more practical by introducing a separate condenser, instantly doubling its efficiency, rendering redundant mines workable and providing the blueprint from which all modern steam engines are designed.

So far, machine learning is missing its separate-condenser moment. So is AlphaGo a breakthrough on the path to emulating human intelligence? I think of it more as a trig point: certainly an important intermediate goal, a chance to map the landscape and take the odd photo, but just a stage on the journey, and one we already knew we would reach. We have, however, got there quicker than expected, and that is a natural cause for celebration.
