Rise of the Silicon Dynasty

The final position of the famous “Blood-Vomiting Game” of 1835 *

Computer chess programs have been beating grandmasters since 1997. One game, though, still eludes artificial intelligence: the ancient Chinese game of Go. Despite its incredibly simple rules, master Go players still destroy our best computers.

Is man’s dominance temporary? Or will Go always be our territory?

Exploring why humans still win at Go, and whether they always will, raises questions of game theory, computation, and human psychology.

First, let’s talk about the rules.  As chess master Edward Lasker said, “The rules of Go are so elegant, organic, and rigorously logical that if intelligent life forms exist elsewhere in the universe, they almost certainly play Go.”  Here we go:

  1. Black and white alternate placing their respective stones on the board. Black goes first.
  2. Stones may be placed alone or adjacent to other stones. Sets of adjacent stones connect to form a “group.” Note that stones are only connected if they are orthogonal to each other; diagonal doesn’t count.
  3. Each individual stone or group is bordered by free, unoccupied spaces. These unoccupied spaces are called liberties. Note that, once again, only orthogonal empty spaces count as liberties.  For example, a single stone away from the board’s edge has four liberties, a group of two stones has six, and so on.
  4. As soon as a group has no liberties, it’s captured. In other words, when black places a stone that occupies the last remaining liberty of a white group, that white group is captured.  The territory once occupied by white now belongs to black.  And vice versa.
  5. At the end of the game, the player controlling the most territory wins. And that’s about it!
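The capture mechanics in rules 2–4 can be sketched in a few lines of code. Below is a minimal, illustrative Python version (the board representation and function names are my own, not from any real Go engine): a flood fill finds a stone’s group and collects its liberties.

```python
# A minimal sketch of rules 2-4. The board is a dict mapping
# (row, col) to 'B' or 'W'; empty points are simply absent.

SIZE = 19

def neighbors(p):
    """Orthogonally adjacent on-board points (rule 2: no diagonals)."""
    r, c = p
    candidates = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return [(r, c) for (r, c) in candidates
            if 0 <= r < SIZE and 0 <= c < SIZE]

def group_and_liberties(board, p):
    """Return the group containing the stone at p, and its liberties."""
    color = board[p]
    group, liberties, frontier = {p}, set(), [p]
    while frontier:
        q = frontier.pop()
        for n in neighbors(q):
            if n not in board:
                liberties.add(n)      # rule 3: empty orthogonal point
            elif board[n] == color and n not in group:
                group.add(n)          # rule 2: same-color stones connect
                frontier.append(n)
    return group, liberties

def is_captured(board, p):
    """Rule 4: a group with no liberties is captured."""
    _, libs = group_and_liberties(board, p)
    return len(libs) == 0
```

Running this on a lone stone in the middle of the board gives the four liberties mentioned in rule 3, and a corner stone surrounded by two enemy stones comes back as captured.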

Despite Go’s apparent simplicity, computers still struggle. The program Zen19D is currently rated 6 dan (19×19 board, 15 seconds/move) on the Kiseido Go Server (KGS), an online Go venue. This corresponds to an Elo rating of roughly 2600, a strong rating, but no match for KGS’s 9-dan professionals (around 2900 Elo). In comparison, the computer chess titan Rybka weighs in at around 3200 Elo.

In computer Go exhibitions, the programs are given handicaps of three or more stones, but still usually lose to master players.

Why is Go so difficult to program?

The number of possible moves is very high. In any given chess position, the number of legal moves is usually no more than 30.  On a 19×19 Go board, however, it is much higher. After the nth move, there are (with some exceptions) 19² − 2n = 361 − 2n points left to play on (note that in one “move,” both players take a turn).  This number is much greater than 30 initially, and will generally remain over 30 even in the endgame.

The result is that a combinatorial explosion is reached much faster in Go than in chess. Assume that a game tree can hold 10¹² nodes (no pruning). In chess, with roughly 30 × 30 = 900 continuations per move, this capacity is reached in log₉₀₀(10¹²) ≈ 4 moves. In Go, with roughly 361 × 359 ≈ 130,000 continuations per opening move, it is reached in about 2.3 moves. So the Go game tree expands nearly twice as fast, and even with a perfect positional evaluator, a Go engine can only look about half as far ahead.
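The arithmetic here is easy to check. A short sketch, using the same rough per-move branching factors as in the text (30 × 30 for chess, 361 × 359 for the Go opening):

```python
import math

def depth_for(capacity, branching):
    """Moves until a uniform game tree reaches `capacity` nodes."""
    return math.log(capacity) / math.log(branching)

# One "move" = a turn by each player, so branching factors multiply.
print(depth_for(1e12, 30 * 30))     # chess: about 4 moves
print(depth_for(1e12, 361 * 359))   # Go: about 2.3 moves
```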

Go positions are difficult to evaluate. Building a chess evaluation algorithm is relatively straightforward. First, the position is evaluated on material: 1 point for a pawn, 3 for a bishop or knight, 5 for a rook, 9 for a queen.  Next, adjustments are made for positional factors. A bishop pair or a rook on an open file may add half a pawn or so to the evaluation score; doubled or isolated pawns may subtract from it.  Of course, choosing exactly how much to add or subtract isn’t trivial, but these numbers can ultimately be tweaked to near-perfection.
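A toy version of such a material-plus-adjustments evaluator might look like the following. The piece values follow the text; the size of the bishop-pair bonus is an illustrative placeholder, not a tuned value:

```python
# Score a chess position in pawns; positive favors White.
PIECE_VALUES = {'P': 1, 'N': 3, 'B': 3, 'R': 5, 'Q': 9, 'K': 0}

def evaluate(white_pieces, black_pieces):
    """Each argument is a list of piece letters, e.g. ['P', 'P', 'R']."""
    score = (sum(PIECE_VALUES[p] for p in white_pieces)
             - sum(PIECE_VALUES[p] for p in black_pieces))
    # Positional adjustment: the bishop-pair bonus from the text.
    if white_pieces.count('B') == 2:
        score += 0.5
    if black_pieces.count('B') == 2:
        score -= 0.5
    return score
```

A real engine layers many more such adjustments (open files, pawn structure, king safety) on top of the material count, but the shape is the same: a hand-written rule, a tunable number.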

Go positions, on the other hand, are extremely difficult to evaluate. Even determining whether a group is alive or dead can be very hard (a dead group is one that cannot avoid capture, even with perfect play).  More difficult still is determining where one might place new stones to establish new living groups. Tougher yet is evaluating whatever abstract positional factors the evaluation protocol includes.

Alpha-beta and other pruning schemes use the positional evaluator to discard bad branches, drastically reducing the effective branching factor.  Given the difficulty of evaluation, however, many good branches might be pruned by accident; the computer would then miss strong moves and be easily overpowered. Any pruning scheme thus risks severely reducing the program’s strength.
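For reference, the alpha-beta scheme itself is short. Here is a minimal generic sketch, where the `children` and `evaluate` callbacks stand in for a real engine’s move generator and evaluator; the `break` lines are exactly the pruning step where a noisy evaluator can silently throw away a strong Go move:

```python
def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    """Generic alpha-beta search over a game tree.

    `children(node)` lists successor positions; `evaluate(node)`
    scores leaf positions from the maximizer's point of view.
    """
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximizing:
        value = float('-inf')
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # prune: the opponent will never allow this line
        return value
    else:
        value = float('inf')
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, children, evaluate))
            beta = min(beta, value)
            if alpha >= beta:
                break  # prune
        return value
```

The pruning decision rests entirely on the scores the evaluator returns, which is why the whole scheme inherits Go’s evaluation problem.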

Go strategy is incredibly abstract. Many argue that, in general, the strategies behind Go are significantly more qualitative and mysterious than those of chess. A player might concentrate his stones in one corner, firmly establishing territory there.  Alternatively, he might scatter stones across the board, gaining wider reach but risking full-scale positional collapse. Neither strategy is necessarily better. He might chase “low-hanging fruit,” but risk losing valuable time pursuing a few stones while his opponent builds strong territory elsewhere. One might design an evaluator that follows one strategy or the other, but computers are generally incapable of spontaneously choosing which strategy to use.

Even if an evaluator does succeed in evaluating position based on a certain strategy, who’s to say that that strategy is best?

It’s pretty clear that the methodology that works well for computer chess, alpha-beta pruning with a rule-based positional evaluator, isn’t quite sufficient for Go. Something more is required.  One promising approach uses neural networks and genetic algorithms to permit machine learning.  In these programs, the evaluation protocol is perturbed at random, and the resulting variants play each other many times; the variant with the highest Elo rating is kept. Thus the computer actually teaches itself the best strategies for Go play.
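A heavily stripped-down sketch of that mutate-and-compete loop follows. Everything here is illustrative: the parameter vector stands in for an evaluator’s weights, and `play_match` is a trivial stand-in for an actual game of Go between two engine variants.

```python
import random

def mutate(weights, scale=0.1):
    """Randomly perturb a candidate evaluator's parameters."""
    return [w + random.gauss(0, scale) for w in weights]

def play_match(w1, w2):
    """Stand-in for a full game: here, the larger weight total 'wins'.
    A real system would play out a game of Go between the variants."""
    return 1 if sum(w1) > sum(w2) else 0

def evolve(weights, generations=50, population=8):
    """Keep whichever variant wins the most round-robin matches."""
    for _ in range(generations):
        candidates = [weights] + [mutate(weights)
                                  for _ in range(population)]
        wins = [sum(play_match(c, other)
                    for other in candidates if other is not c)
                for c in candidates]
        weights = candidates[wins.index(max(wins))]
    return weights
```

Real systems replace the toy match with thousands of self-play games and track ratings rather than raw win counts, but the selection pressure works the same way.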

Indeed, Go programs are getting better every year.  It seems that computer Go today, like computer chess a few decades ago, marches inexorably towards dominance over man.  After all, in the 1970s people debated whether computers would ever beat masters at chess.  We’ve long since learned the answer to that question.

Still, computer Go seems to be a more daunting exercise.  And it’s almost tempting to believe that the difficulties listed earlier aren’t surmountable; that the human mind possesses some quality that can’t be reproduced in a computer, and that this quality is necessary for success at Go.  Perhaps Go will always be our domain, forever beyond the grasp of the machine. And though this may be wishful thinking, one must admit that if there ever were a game that could forever thwart computation, that game would be Go.


___________________________________

Footnotes

* The famous Blood-Vomiting Game was played between Honinbo Jowa and Intetsu Akaboshi, masters from prestigious rival Go houses. Marked are Jowa’s three “ghost moves,” moves so brilliant he claimed to have heard them from ghosts. After his loss, 25-year-old Go prodigy Akaboshi is said to have vomited blood onto the board, and died just months later.

3 comments on “Rise of the Silicon Dynasty”

  1. Richard says:

    If neural networks are being used to make machines better at Go, then isn’t it a bit bizarre to continue to think that machines won’t eventually match the human brain in Go? After all, some think that the brain itself is just a type of complex neural network, whereas the idea that the brain/mind is a computer in the older sense of a linearly computing Turing machine is less credited nowadays. There is reason to believe that neural networks are super-Turing machines, capable of computational feats that Turing machines cannot complete: feats, for example, like “supertasks,” a fertile area of research in philosophy. Supertasks are tasks involving a denumerable infinity of discrete tasks being performed in a finite amount of time. Hypercomputation is a branch of theoretical computer science which studies types of computation not normally achievable by traditional Turing machines. Neural networks, with idealized infinite capacities, supposedly can perform these hypercomputations. Probably, a neural network with finite capacities, like the brain, is capable of things a Turing machine with finite capacities, like a computer, is not.

    • Ben says:

      The (idealized) Turing Machine does have infinite capacity — an infinite tape, if you will. (The machine operates on a tape of symbols which extends infinitely on each side.) Despite the fact that physical computers have finite memory, Turing machines have proven to be fruitful models nonetheless.

      Though neural networks are certainly quite powerful, especially in the field of machine learning, I’m not sure that they’re powerful in the way you’ve described them to be. You’ve compared a collection of algorithms (neural networks), on the one hand, to a theoretical paradigm describing the computer itself (Turing machines), on the other, and I don’t think the comparison is correct, or even coherent.

      “Neural networks” refers to a body of cutting-edge algorithms which, in some sense, mimic the human brain. These algorithms, though, compile and run on standard computers which are modeled by classical Turing machines. These algorithms are certainly not super-Turing machines in the sense that they can complete “computational feats that Turing machines cannot” — they’re rather algorithms which run on Turing machines.

      Your statement would be better applied to, say, quantum computers, which actually constitute a different theoretical computing paradigm, namely, the quantum Turing machine. It is true that quantum Turing machines can perform computational feats which traditional Turing machines cannot.

      Of course, your initial hypothesis is probably still correct — either because (classical) computers still continue to advance in speed, or because quantum computers might soon become a physical reality.

      As far as supertasks go (which playing Go, it seems, would not involve), it’s hard for me to see how a physical computer could ever complete one. There are certainly infinite series whose partial sums converge to a finite number (e.g. 1/2 + 1/4 + 1/8 + … = 1). A computer which could solve task n in 1/2^n seconds would presumably finish them all after one second. But it seems that no physical computer could keep up once the time intervals became very small, and arbitrarily small intervals are exactly what convergence requires.
