AlphaGo: using machine learning to master the ancient game of Go

The game of Go originated in China more than 2,500 years ago. Confucius wrote about the game, and it is considered one of the four essential arts required of any true Chinese scholar. Played by more than 40 million people worldwide, the game has simple rules: players take turns placing black or white stones on a board, trying to capture the opponent's stones or surround ...

Google DeepMind: Ground-breaking AlphaGo masters the game of Go

In a paper published in Nature on 28th January 2016, we describe a new approach to computer Go. This is the first time that a computer program, AlphaGo, has defeated a human professional player. The game of Go is widely viewed as an unsolved "grand challenge" for artificial intelligence. Games are a great testing ground for inventing smarter, more flexible algorithms that can tackle problems in ways similar to humans.

The first classic game mastered by a computer was noughts and crosses (also known as tic-tac-toe) in 1952. But until now, one game has thwarted A.I. researchers: the ancient game of Go. Despite decades of work, the strongest computer Go programs only played at the level of human amateurs.

AlphaGo has won over 99% of games against the strongest other computer Go programs. It also defeated the human European champion 5-0 in tournament games, a feat previously believed to be at least a decade away. In March 2016, AlphaGo will face its ultimate challenge: a 5-game challenge match in Seoul against the legendary Lee Sedol, the top Go player in the world over the past decade.

This video tells the story so far... With Demis Hassabis, Google DeepMind.

Deep Blue photo credit courtesy of International Business Machines Corporation, © International Business Machines Corporation.