Mind of the Machine: AlphaGo and Artificial Intelligence

An Example of a Go board from Go Wiki

I see the great gaps between actual thinking and what AlphaGo is doing much more clearly having watched it play.

Recently, another chapter of man vs. machine played out. Google's DeepMind team tried out their state-of-the-art algorithm on the game of Go. Its opponent was the Korean pro Lee Sedol, a world champion several times over and arguably the best player of the game right now. To put it simply, this was the equivalent of Deep Blue vs. Garry Kasparov, and as with the IBM Chess-playing machine before it, AlphaGo took home the prize, four wins to one loss.

Go has been thought to be the one game that computers could not beat a human at because a computer could not brute-force the move tree. Chess may have an astronomically large set of possible moves, but it is nothing compared to Go. Chess offers 20 options for the first move, while Go offers 361. However, the feat accomplished by AlphaGo and the DeepMind team is even more amazing than these raw numbers would suggest.

To continue the comparison, of those 20 opening Chess moves, only six or so would ever seriously be considered by a competitive Chess player. No one wanting to win would open with a Rook or Bishop's pawn, for example. Likewise, in Go, of the 361 opening moves only about 29 are legitimate options, and most of those are mirror images of one another thanks to the board's symmetry. At a glance, looking only at the numbers, that would seem to reduce the processing power needed to accomplish AlphaGo's task, except that poor opening moves in Chess are immediately apparent. In Go, it is not obvious why good first moves are good, which makes it much harder for a computer to calculate reasonable responses to them.

Stones, unlike Chess pieces, do not move when placed on the board. A spot is chosen not just because of the rules regarding capturing stones and creating territory, i.e. getting points, but because each stone is an implicit threat or is staking a claim. Go players say a stone has influence and can have influence on other stones without directly touching them or even being near them. Additionally, the type of influence that one stone has on another can change over the course of the game. If one wants to make a computer play at the highest echelons of the game, the question one has to answer is: how do you quantify this influence?

Computers "think" in numbers. Whether it is the physics of a bullet's trajectory, the height and speed of a jump, or even the simple yes/no binary that triggers story progression in a game, every computer game that you've ever played can be reduced to math. Influence, though, isn't so much mathematical as it is a feeling experienced by the player. A move can make a player feel their stones are cramped, or a move can make them feel their stones have broken free.

It is rather fascinating to hear some of the AlphaGo project members explain the systems that guide AlphaGo's play in this regard. Much of the idea behind those systems is based on probabilities learned from previous games, first those played by humans and then those the program has played against itself. This produces a kind of continuously updating heat map of important points around the board. AlphaGo then runs a tree search to narrow down candidate moves and test how they might play out. In a way, it sounds like a programmed version of the experience and instinct of human Go pros.
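The pairing described above, a learned probability map over moves plus a search that spends more effort on the moves that map already favors, can be caricatured in a few lines. Everything below is invented for illustration: the 5x5 board, the `toy_policy` that simply prefers central points, and the random stand-in for a playout. AlphaGo's actual networks and Monte Carlo tree search are vastly more sophisticated, but the shape of the loop is the same.

```python
import math
import random

random.seed(0)
BOARD_SIZE = 5  # toy board for illustration, not the real 19x19

def toy_policy(legal_moves):
    """Stand-in for a trained policy network: a 'heat map' of prior
    probabilities over moves. Here it just favors central points."""
    center = (BOARD_SIZE - 1) / 2
    scores = {m: math.exp(-((m[0] - center) ** 2 + (m[1] - center) ** 2) / 4)
              for m in legal_moves}
    total = sum(scores.values())
    return {m: s / total for m, s in scores.items()}

def toy_rollout_value(move):
    """Stand-in for evaluating a playout from `move`: a random win rate."""
    return random.random()

def select_move(legal_moves, simulations=200):
    priors = toy_policy(legal_moves)
    visits = {m: 0 for m in legal_moves}
    wins = {m: 0.0 for m in legal_moves}
    for t in range(1, simulations + 1):
        def score(m):
            # Exploit the observed win rate, but let the prior steer
            # exploration toward moves the policy already rates highly.
            q = wins[m] / visits[m] if visits[m] else 0.0
            u = priors[m] * math.sqrt(t) / (1 + visits[m])
            return q + u
        m = max(legal_moves, key=score)
        wins[m] += toy_rollout_value(m)
        visits[m] += 1
    # Play the most-visited move, as the search trusted it the most.
    return max(legal_moves, key=lambda m: visits[m])

moves = [(r, c) for r in range(BOARD_SIZE) for c in range(BOARD_SIZE)]
best = select_move(moves)
```

Because the rollouts here are random, the search mostly follows the prior and settles on a central point; in the real system, the playout evaluations are what let the program overrule its first instincts.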

While there is a certain amount to be said for creating game-playing machines for their own sake, these challenges are really attempted in service of another goal. The narrative underlying such machines holds that if they can best us at our own games at the highest levels, then they will soon be able to think like us. It makes for a nice narrative and the premise of more than one science fiction story. However, there are two moments from the five-game match that I would like to highlight, moments that perhaps speak to the mind of the machine.

By sweeping the first three games, AlphaGo had already secured victory. In game four, Sedol found himself in a tight spot. Estimates put him behind on territory, so he invaded the center of the board. He had to make those center stones live, or escape with them, if he wanted any chance of winning. There was little space to make a shape that might allow for his stones' survival, and the way out of this gambit required that he play three particular moves. Luckily, two of them were forcing moves: moves where, if I play here, you have to respond there, or I destroy your position. The trick was accomplishing the third move. After nearly running out his clock, Sedol began laying the groundwork for it by playing nearby, threatening the position of some other stones. The hope was that this would lead to a sequence that changed the state of that group of stones, so that the third move would become a forcing move itself.

AlphaGo responded by ignoring Sedol's move and playing elsewhere. Sedol then responded to protect the group of stones in this new area from being cut off, and the game continued along this line for a bit. Then AlphaGo got around to answering the earlier move, now from a slightly weaker position, with an unexpected move. Sedol got what he wanted, and with far less pushback than either he or the commentators expected given the level of play to that point. Then, when he went to play the other two forcing moves, AlphaGo didn't respond in the way that was expected and left a hole that Sedol was later able to exploit, destroying its position. The game continued for a while, but it ended with AlphaGo jumping around different sections of the board making endgame moves well before the endgame had begun, before eventually resigning. You could almost hear the elongated drone of "Daisy...daisy..." emanating from the machine's play.

Game five didn't allow for the kind of especially complicated sequence that Sedol pulled off in game four's center. At least, that's what we humans thought. Once both players had moved past the opening and the game's first skirmish got underway, AlphaGo made a huge mistake, a beginner's mistake in fact, one that cost it about 15 stones and somewhere around an additional 20-point swing in territory.

In Go, there are common recurring tactical patterns known as tesuji. Go players are trained to recognize them and learn how to react to them. AlphaGo has no such catalog of shapes built into it. A human would recognize the shape and respond accordingly without having to read out much of anything, or would realize that the shape could be made and avoid creating it in the first place. Because humans handle it so instinctively, no one anticipated that working out a response would prove too difficult for AlphaGo. Funnily enough, this particular tesuji is known as the tombstone.

What do these two stories tell us about AlphaGo? It can't recognize a move with an ulterior motive as an urgent threat. It can't reduce complex patterns to simple shapes via metaphor. Ultimately, they tell us that it is still a computer program, dependent on reducing everything to numbers and lacking the cognition to think in certain kinds of abstraction.

There is a reverence for Go that does not quite exist for other board games. Chess may be known as the thinking man's game, even if only subconsciously, but Go is considered an art. In ancient times, alongside calligraphy, painting, and music, Go was considered one of the four arts of a Chinese gentleman-scholar.

Good players look at a move and have a sense of how the other player is thinking, if not what the player is thinking. The wide possibility space of the board not only allows, but invites, personalization. In a way, several moves throughout all five games of the match revealed the mind of a machine. These two sequences are just the easiest to explain to those who don't study the deep intricacies of the game. During the match, the commentators could not describe them without ascribing thought or motive to AlphaGo. That's true of every move it made, but it was the unique ones that made those watching feel they were witnessing something uniquely personal from a machine.

We humans like to anthropomorphize. From animals to tools to computers, we like to think of things as being something like us. The drive to create artificial intelligence is the most extreme variant of that behavior. Yet, AlphaGo is not cognitive. It doesn't think. It doesn't get suspicious, confused, worried, desperate, arrogant, complacent, or exhibit any of the other states ascribed to the program over the week-long match. In reality, winning at a game is just that: a game.

We do use games as shorthand to say things about people. We see someone playing Chess, and we think that they are smart and clever. We see them playing Go, and we think that they understand the world more holistically than your average person does. Thanks to machines, we know that these games are no longer the thinking man's game or the philosophical expression of gentleman-scholars. They are just games.

And yet, while one could say the machines got smarter, I instead see the great gaps between actual thinking and what AlphaGo is doing much more clearly, having watched it play. When I started playing, I didn't think that I'd see this leap in intelligence in my lifetime, and I am in awe of the accomplishments made to improve artificial intelligence. But, in that space between wonder and the uncanny, games retain their relevance and meaning to human thought. AlphaGo plays a good game of Go, but for now, that is all that it can do.

© 1999-2018 All rights reserved.
Popmatters is wholly independently owned and operated.