AlphaGo wins third consecutive game against go world champion




AlphaGo has also won the third game against go world champion Lee Sedol. That means that Google DeepMind's artificial-intelligence program has won the majority of the games, taking the match and securing the million dollars in prize money.

The game, in which the South Korean world champion Lee faced Google's artificial-intelligence program for the third time, ultimately took more than four hours. "I'm sorry that I could not meet people's expectations," said Lee. He thinks the pressure in the third game was too much. AlphaGo, on the other hand, continued to perform well, despite situations arising that had not been seen in games 1 and 2. The million-dollar prize goes to various charities.

Although AlphaGo has now beaten its human opponent, the remaining two games will still be played, as in most go matches; there is no mercy rule. And although Lee has been defeated by Google DeepMind's deep learning system, the South Korean could still prove that the program can in principle be beaten by a human. The final two games will be played on Sunday March 13 and Monday March 14.


"Every day I have to rewrite my story; Lee had never expected that he, as a human, could be defeated," said Leo Dorst of the faculty of physics, mathematics and computer science of the University of Amsterdam. Dorst said this during a meeting about the match between AlphaGo and Lee Sedol on Thursday, March 10th, the day Lee lost to Google's deep learning system for the second time.

The contest between man and machine has generated great excitement among connoisseurs of the at-first-sight simple game, which in artificial intelligence had long been regarded as hard to win. "That is exactly what go is: simple but exciting. Always new, simple and complicated at the same time, which makes it extra appealing to nerds," says Dorst in a lecture hall packed with mostly students from the science faculty.

To indicate what AlphaGo has achieved, Dorst describes how it works for humans. "A talented child reaches about 3 dan by the age of 15." A dan is a grade that indicates how good a player is; professional 9 dan is the highest. "The difference between Fan Hui, who lost to AlphaGo in October, and Lee Sedol amounts to ten years of training, eight hours a day. Lee is 33 and has been a professional since he was twelve," said Dorst. "He is also creative, because he invents new opening moves. That is why many thought AlphaGo would struggle. Lee Sedol is much stronger than Fan Hui. Everyone thought that with this match the makers of AlphaGo would undermine their own success. That has now been placed in a somewhat different perspective."

Yet the human was in a way also at a disadvantage during the match: Lee knows that a million dollars is at stake, and he knows that he is playing against a program. Also, there are normally three or four rest days between games; in this case there is only one. And because AlphaGo uses a playing style that humans would not normally employ, Lee can prepare less well for the next game. In the first game Lee played a somewhat unorthodox opening, seemingly to test AlphaGo. He also made use of plays that, according to experts, you would only use against weak players.

In the second game Lee played a so-called waiting game; when he plays that way against humans, they lose. "It was predicted that you would need 10,000 GPUs to reach Lee's level. Anyone who thought AlphaGo would win was declared insane, but it turned out differently. The go community was shocked at first, but after AlphaGo's second win it embraced the program. People now think AlphaGo will enrich go."

Dorst then goes back to AlphaGo's victory over Fan Hui. Fan is Europe's best go player, a 2 dan professional who began playing go in 1988. In the study published in January, AlphaGo lost two games to Fan and won eight. It had been agreed beforehand that games played under certain conditions would not count toward the result. Fan did better in the short informal games, but those did not count.

There is reasonable fear of cheating in online go; this is already a major problem in online chess. But according to Dorst this problem is not related to the fact that computers are more powerful than humans, but to humans themselves.

Artificial intelligence

Go is a game made quite complicated by the sheer number of possible positions. "There are entire studies of the game, and go games from the distant past are still actively studied," said Dorst. "In chess the goal is clear: you have to capture the king. In go it is not so clear; the goal is less tangible. Without guidance, go is therefore a difficult game for beginners. Capturing the opponent's stones, for instance, is not the primary objective, while children especially think that it is."

"In go it is actually the groups of stones and the areas between them that matter in the game. A strong player knows which group is strong or weak and what the final score could be. For professionals the difference in the final score is small, just a couple of points, and that while each player can score about 180 points on a board with 19 by 19 lines," said Dorst. "For professionals a game often runs slightly over 250 moves. Someone once calculated that the maximum number of moves could be 2×10⁴⁶, but no human would survive that." A typical match between pros usually takes about five hours, though a Japanese title match has sometimes clocked twenty hours.

Go is in any case a game of big numbers. It is therefore practically impossible for a computer to calculate all possible moves from a given position in advance, as is done in chess. The AlphaGo machine makes use of several machine learning elements. Max Welling, professor of machine learning at the University of Amsterdam, briefly explains how AlphaGo works, based on the paper published in January.
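A back-of-the-envelope calculation illustrates why brute force fails here. The figures below are the commonly cited ballpark numbers (branching factor and game length), not values from the article:

```python
# Rough game-tree sizes as b**d: branching factor b, game length d.
# Ballpark figures: chess has ~35 legal moves over ~80 plies,
# go has ~250 legal moves over ~150 plies.
chess_tree = 35 ** 80
go_tree = 250 ** 150

# The go tree is so much larger that even squaring the chess tree
# does not reach it.
print(go_tree > chess_tree ** 2)
```

This gap is why exhaustively evaluating moves, as Deep Blue did for chess, is hopeless for go.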

"Although AlphaGo has probably changed significantly since the last time the computer played against a champion, the basics will not have changed much," says Welling. "To go players it was clear: AlphaGo would not win. The current state of affairs is different."

AlphaGo uses four machine learning ingredients: supervised deep learning, reinforcement learning, Monte Carlo tree search, and deep convolutional networks that scan the board the way image-recognition systems scan a picture.

Learning to make predictions from data of previous games is called supervised learning. In that case there is an existing data set from which predictions are made. If a prediction is wrong, the algorithm is adjusted a little bit, until the outcome is right.
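The "adjust a little when wrong" loop can be sketched in a few lines. This is a deliberately tiny linear model, not AlphaGo's actual policy network (which is a deep convolutional net trained on millions of expert positions); the feature vector and target are made-up toy values:

```python
# Minimal supervised-learning sketch: nudge the weights whenever
# the prediction misses the target from the data set.
def train_step(weights, features, target, lr=0.1):
    prediction = sum(w * f for w, f in zip(weights, features))
    error = target - prediction                 # how wrong were we?
    # adjust each weight a little in the direction that reduces error
    return [w + lr * error * f for w, f in zip(weights, features)]

weights = [0.0, 0.0]
for _ in range(100):                            # repeat until right
    weights = train_step(weights, [1.0, 0.5], 1.0)

prediction = sum(w * f for w, f in zip(weights, [1.0, 0.5]))
print(round(prediction, 2))  # converges toward the target 1.0
```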

The second process is called reinforcement learning. Here the neural network performs an action itself, such as placing a stone at a specific position on the board, and then finds out whether it would win or lose with that move. If it wins, the policy can be improved in that direction. "However, such signals can be rather noisy," says Welling.

Additionally, AlphaGo analyzes games that people have played. How would a human play? Where would a human place a stone? Then there is a network that generates new games itself, millions of them. That data set is in turn used to train another network. The latter network is not concerned with the value of a move; it looks at the value of the position. In this way AlphaGo trains itself on both human games and its own.
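The difference between scoring a move and scoring a position can be sketched as a value function trained on game outcomes. The toy "positions" below are two-number feature vectors labeled by who won; AlphaGo's real value network is a deep convolutional net trained on millions of self-play positions:

```python
# Position evaluation sketch: map a whole position to an expected
# outcome (+1 win, -1 loss), learned from finished games.
def value(position, weights):
    return sum(w * f for w, f in zip(weights, position))

def train(positions, outcomes, lr=0.05, epochs=200):
    weights = [0.0] * len(positions[0])
    for _ in range(epochs):
        for pos, outcome in zip(positions, outcomes):
            err = outcome - value(pos, weights)
            weights = [w + lr * err * f for w, f in zip(weights, pos)]
    return weights

# Toy self-play positions, labeled with the eventual result.
positions = [[1.0, 0.0], [0.0, 1.0], [0.8, 0.2], [0.1, 0.9]]
outcomes = [1.0, -1.0, 1.0, -1.0]
w = train(positions, outcomes)

# An unseen position resembling the winning ones scores positive.
print(value([0.9, 0.1], w) > 0)
```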

"Eventually Monte Carlo tree search comes around the corner," says Welling. "Every move has a value. In chess you can try every possible move within a certain time and then choose the best one. In go there are too many possibilities for that."

Yet AlphaGo does sometimes play games out to the end, as noted earlier. It does this in a "cheap" way: once AlphaGo wins or loses such a playout, it is put back at the point where it started, and the process is repeated. The information this produces is fed back into the Monte Carlo tree search.
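Those cheap playouts can be sketched as follows: from each candidate move, finish the game many times with fast random play and feed the win rate back into that move's statistics. The three moves and their hidden strengths are invented for the example, and the real search also uses the policy and value networks to steer which branches get playouts:

```python
import random

random.seed(1)

def random_playout(win_prob):
    """Stand-in for playing one game to the end with fast random moves."""
    return 1 if random.random() < win_prob else 0

# Hidden true strength of three candidate moves (unknown to the search).
TRUE_STRENGTH = {"A": 0.3, "B": 0.7, "C": 0.5}

stats = {m: [0, 0] for m in TRUE_STRENGTH}       # [wins, visits]
for _ in range(3000):
    move = random.choice(list(TRUE_STRENGTH))    # pick a branch to try
    stats[move][0] += random_playout(TRUE_STRENGTH[move])
    stats[move][1] += 1                          # feed the result back

best = max(stats, key=lambda m: stats[m][0] / stats[m][1])
print(best)  # accumulated playout statistics single out the strongest move
```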

Compared with the chess computer Deep Blue, AlphaGo uses about a thousand times fewer board evaluations. Instead, it makes much more use of machine learning.

The entire lecture, including slides, can be seen via the UvA web lectures.

