Computer chess programs have been beating grandmasters since 1997. One game, though, still eludes artificial intelligence: the ancient Chinese game of Go. Despite its incredibly simple rules, master Go players still destroy our best computers.
Is man’s dominance temporary? Or will Go always be our territory?
Exploring why humans still win at Go, and whether they always will, raises questions of game theory, computation, and human psychology.
First, let’s talk about the rules. As the chess master and Go pioneer Edward Lasker said, “The rules of Go are so elegant, organic, and rigorously logical that if intelligent life forms exist elsewhere in the universe, they almost certainly play Go.” Here we go:
- Black and white alternate placing their respective stones on the board. Black goes first.
- Stones may be placed alone or adjacent to other stones. Sets of adjacent stones connect to form a “group.” Note that stones are only connected if they are orthogonal to each other; diagonal doesn’t count.
- Each individual stone or group is bordered by free, unoccupied spaces. These unoccupied spaces are called liberties. Note that, once again, only orthogonal empty spaces count as liberties. For example, a single stone in the middle of the board has four liberties, a group of two stones has six, and so on (stones on the edge or in a corner have fewer).
- As soon as a group has no liberties, it’s captured. In other words, when black places a stone that occupies the last remaining liberty of a white group, that white group is captured. The territory once occupied by white now belongs to black. And vice versa.
- At the end of the game, the player controlling the most territory wins. And that’s about it!
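The liberty and capture rules above can be sketched in a few lines of Python. This is an illustrative flood fill, not any real Go library’s API; the board representation (a dict mapping (row, col) to "B" or "W") and all the function names are assumptions made for the example.

```python
# A minimal sketch of the liberty/capture rule. Empty points are simply
# absent from the board dict; only orthogonal neighbors connect stones.

def neighbors(pt, size=19):
    """Yield the orthogonal neighbors of a point that lie on the board."""
    r, c = pt
    for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
        if 0 <= nr < size and 0 <= nc < size:
            yield (nr, nc)

def group_liberties(board, pt, size=19):
    """Return (group, liberties) for the stone at pt, via flood fill."""
    color = board[pt]
    group, libs, frontier = {pt}, set(), [pt]
    while frontier:
        for n in neighbors(frontier.pop(), size):
            if n not in board:
                libs.add(n)            # empty point: a liberty
            elif board[n] == color and n not in group:
                group.add(n)           # same color: part of the group
                frontier.append(n)
    return group, libs
```

When `group_liberties` returns an empty liberty set, the group has just been captured, exactly as described in the rules above.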
Despite Go’s apparent simplicity, computers still struggle. The computer program Zen19D is currently rated 6 dan (19×19 board, 15 seconds/move) on the Kiseido Go Server (KGS), an online Go venue. This corresponds to an Elo rating of roughly 2600, a strong rating, but no match for KGS’s 9 dan professionals (2900 Elo). In comparison, computer chess titan Rybka weighs in at 3200 Elo.
In computer Go exhibitions, the programs are given handicaps of three or more stones, but still usually lose to master players.
Why is Go so difficult to program?
The number of possible moves is very high. In any given chess position, the number of legal moves is usually no more than 30. On a 19×19 Go board, however, the number of legal moves is much higher. After the nth move, there are (with some exceptions) 19² − 2n = 361 − 2n points to choose from (note that here one “move” means both players taking a turn). This number is much greater than 30 initially, and generally remains over 30 even in the endgame.
The result is that a combinatorial explosion is reached much faster in Go than in chess. Assume that a game tree can hold 10¹² nodes (no pruning). In chess, where one full move offers roughly 30 × 30 = 900 continuations, this capacity would be reached in log₉₀₀(10¹²) ≈ 4 moves. In Go, it is reached in about 2.3 moves. So the Go game tree expands nearly twice as fast. Even with a perfect positional evaluator, a Go engine can only look half as far ahead.
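The arithmetic above is easy to check directly. The branching figures are rough assumptions: 30 × 30 continuations per full chess move, and 361 × 360 for an early full move in Go.

```python
# Back-of-the-envelope check: after how many full moves does each game
# tree exceed 10**12 nodes, given a fixed branching factor per move?
import math

capacity = 10**12
chess_branching = 30 * 30     # ~30 legal moves for each side
go_branching = 361 * 360      # early-game legal points on a 19x19 board

chess_depth = math.log(capacity, chess_branching)   # about 4 moves
go_depth = math.log(capacity, go_branching)         # about 2.3 moves
```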
Go positions are difficult to evaluate. Building a chess evaluation algorithm is relatively straightforward. First, the position is evaluated on material: 1 point for pawns, 3 for bishops and knights, 5 for rooks, 9 for queens. Next, adjustments are made for positional factors. The bishop pair or rooks on open files may add half a pawn or so to the evaluation score. Doubled or isolated pawns may detract from it. Of course, choosing exactly how much to add or subtract isn’t trivial, but these numbers can ultimately be tweaked to near-perfection.
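A toy sketch of the kind of evaluator just described, assuming FEN-style piece letters (uppercase for White, lowercase for Black) and purely illustrative bonus values, exactly the sort of numbers an engine author would tweak:

```python
# Classic material count plus small positional adjustments, scored
# from White's point of view in units of pawns.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def material_score(pieces):
    """pieces: iterable of FEN letters, e.g. 'RNBQKPPPrnbqkppp'."""
    score = 0.0
    for p in pieces:
        value = PIECE_VALUES[p.upper()]
        score += value if p.isupper() else -value
    return score

def evaluate(pieces, white_has_bishop_pair=False, white_doubled_pawns=0):
    # Positional factors are worth fractions of a pawn.
    score = material_score(pieces)
    if white_has_bishop_pair:
        score += 0.5                     # illustrative bonus
    score -= 0.25 * white_doubled_pawns  # illustrative penalty
    return score
```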
Go positions, on the other hand, are extremely difficult to evaluate. Even determining whether a group is alive or not can be very difficult (dead groups cannot avoid capture, even with perfect play). More difficult than that is determining where one might place new stones to establish new living groups. Tougher still is the evaluation of abstract positional factors included within the evaluation protocol, whatever those factors may be.
Alpha-beta and other pruning schemes can use the positional evaluator to prune bad branches, thus drastically reducing the branching factor. However, given the difficulty of evaluation, many good branches might be accidentally pruned. Thus the computer might miss strong moves and be easily overpowered. Any attempt at creating a pruning scheme risks severely reducing program strength.
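The pruning idea can be made concrete with an alpha-beta sketch in negamax form. The `legal_moves`, `apply_move`, and `evaluate` callbacks are placeholders for whatever game logic an engine supplies; only the cutoff step itself is the standard algorithm.

```python
# Negamax alpha-beta: each side maximizes the negation of the child's
# score. evaluate(pos) must score from the side to move.

def alphabeta(pos, depth, alpha, beta, legal_moves, apply_move, evaluate):
    moves = legal_moves(pos)
    if depth == 0 or not moves:
        return evaluate(pos)
    for move in moves:
        score = -alphabeta(apply_move(pos, move), depth - 1,
                           -beta, -alpha, legal_moves, apply_move, evaluate)
        alpha = max(alpha, score)
        if alpha >= beta:
            break   # cutoff: the opponent will never allow this line
    return alpha
```

The danger the text describes lives in that `break`: it is only safe if the evaluation behind `alpha` and `beta` is trustworthy, which in Go it usually isn’t.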
Go strategy is incredibly abstract. Many argue that, in general, the strategies behind Go are significantly more qualitative and mysterious than those of chess. A player might concentrate stones in one corner, firmly establishing territory there. Alternatively, one might scatter stones across the board, gaining wider reach but risking full-scale positional collapse. Neither strategy is necessarily better. One might chase “low-hanging fruit,” capturing just a few stones, but lose valuable time while the opponent builds strong territory elsewhere. One might design an evaluator that follows one strategy or the other, but computers are generally incapable of spontaneously choosing which strategy to use.
Even if an evaluator does succeed in evaluating position based on a certain strategy, who’s to say that that strategy is best?
It’s pretty clear that the methodology that works well for computer chess—alpha-beta pruning with a rule-based positional evaluator—isn’t quite sufficient for Go. Something more is required. One promising solution is the use of neural networks and genetic algorithms to permit machine learning. In these programs, the evaluation protocol is changed at random, and then all the instances of the algorithm play each other many times. The algorithm with the highest Elo rating in the end is chosen. Thus the computer actually teaches itself the best strategies for Go play.
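The self-play selection loop just described might be sketched like this. Everything here is an assumption for illustration: `play_game` is a stub standing in for a real engine match, the evaluator is reduced to a bare list of weights, and champion selection by raw win count stands in for a proper Elo calculation.

```python
# Toy evolutionary loop: mutate an evaluator's weights, hold a
# round-robin tournament, keep the champion, repeat.
import random

def mutate(weights, scale=0.1):
    """Return a randomly perturbed copy of the weight vector."""
    return [w + random.gauss(0, scale) for w in weights]

def round_robin(population, play_game):
    """play_game(a, b) -> 0 if a wins, 1 if b wins. Returns win counts."""
    wins = [0] * len(population)
    for i in range(len(population)):
        for j in range(i + 1, len(population)):
            winner = play_game(population[i], population[j])
            wins[i if winner == 0 else j] += 1
    return wins

def evolve(weights, play_game, generations=20, pop_size=8):
    for _ in range(generations):
        population = [weights] + [mutate(weights) for _ in range(pop_size - 1)]
        wins = round_robin(population, play_game)
        weights = population[wins.index(max(wins))]   # keep the champion
    return weights
```

Because the current champion is always re-entered in the next tournament, the surviving weights can only improve (as judged by the tournament), which is the essence of the self-teaching loop described above.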
Indeed, Go programs are getting better every year. It seems that computer Go today, like computer chess a few decades ago, marches inexorably towards dominance over man. After all, in the 1970s people debated whether computers would ever beat masters at chess. We’ve long since learned the answer to that question.
Still, computer Go seems to be a more daunting exercise. And it’s almost tempting to believe that the difficulties listed earlier aren’t surmountable; that the human mind possesses some quality that can’t be reproduced in a computer, and that this quality is necessary for success at Go. Perhaps Go will always be our domain, forever beyond the grasp of the machine. And though this may constitute wishful thinking, one must admit that if there ever was a game that could forever thwart computation, that game would be Go.
References and further reading:
* The famous Blood-Vomiting Game was played between Honinbo Jowa and Intetsu Akaboshi, masters from prestigious rival Go houses. Jowa played three “ghost moves,” moves so brilliant he claimed to have heard them from ghosts. After his loss, the 25-year-old Go prodigy Akaboshi is said to have vomited blood onto the board, and he died just months later.