Go figure


I’m a terrible Go player. Perhaps that’s why I hadn’t quite understood AlphaGo until I recently read more about it in Erik Brynjolfsson and Andrew McAfee’s book Machine, Platform, Crowd.

Machines became better than humans at chess a while back, as increasing computing power enabled Deep Blue and the like to calculate possible moves and evaluate how likely each was to help the computer win. But there are more possible positions in Go than there are atoms in the universe. In fact there are enough possible positions for there to be a universe of atoms for every atom in the universe (the atom count is roughly 10 to the power 82, in case you’re wondering, while the number of legal Go positions is around 10 to the power 170).
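If you want to sanity-check that claim, the arithmetic fits in a few lines of Python. The figures are the usual rough estimates: about 10^82 atoms in the observable universe, and about 10^170 legal Go positions (John Tromp’s enumeration puts it near 2 × 10^170; the exact numbers barely matter at these scales).

```python
# Back-of-the-envelope check: is there really "a universe of atoms
# for every atom in the universe" among Go's legal positions?
# Rough estimates: ~10^82 atoms in the observable universe,
# ~10^170 legal Go positions (Tromp's count is ~2 x 10^170).
atoms_in_universe = 10**82
legal_go_positions = 10**170

# A universe of atoms for every atom would need atoms squared positions.
needed = atoms_in_universe**2          # 10^164
print(needed <= legal_go_positions)    # True, with about 10^6-fold room to spare
```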

Even today cracking that by brute force would require more computing power than we have available. But what I hadn’t understood was that the best human Go players don’t know why they’re so good.

“How do the top human Go players navigate this absurd complexity and make smart moves? Nobody knows — not even the players themselves. Go players learn a group of heuristics and tend to follow them. Beyond these rules of thumb, however, top players are often at a loss to explain their own strategies. As Michael Redmond, one of the few Westerners to reach the game’s highest rank, explains, ‘I’ll see a move and be sure it’s the right one, but won’t be able to tell you exactly how I know. I just see it.’”

So programming a computer to play Go is tricky because we don’t know what to teach it. What AlphaGo did was teach itself. Back in the 1950s, when the theoretical basis for artificial intelligence was being laid out, there were two branches: one was rule-based (a bit like the way adults learn a new language), while the other was essentially statistical (like a child learning to talk by trying things over and over again). What AlphaGo shows is just how powerful this second version of AI has now become.
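To make that contrast concrete, here’s a deliberately tiny sketch. Everything in it is invented for illustration: a one-move “game” with made-up win probabilities, and hypothetical player functions. AlphaGo’s real machinery of deep neural networks and self-play is vastly beyond this, but the shape of the two branches is the same: one player is told the answer, the other discovers it by trying things over and over.

```python
import random

# A made-up one-move "game": move "a" wins 60% of the time, "b" 40%.
# Moves and probabilities are hypothetical, chosen only to illustrate.
def play(move):
    return random.random() < {"a": 0.6, "b": 0.4}[move]

# The rule-based branch: a human encodes the answer as an explicit rule.
def rule_based_player():
    return "a"  # "always play a" -- a rule somebody wrote down

# The statistical branch: no rules given; estimate each move's value
# by trial and error, like a child trying things over and over.
def statistical_player(trials_per_move=2000):
    win_rate = {}
    for move in ("a", "b"):
        wins = sum(play(move) for _ in range(trials_per_move))
        win_rate[move] = wins / trials_per_move
    return max(win_rate, key=win_rate.get)

print(rule_based_player())   # "a", because the rule says so
print(statistical_player())  # almost always "a", learned from experience
```

The second player can’t explain why “a” is better; it just sees, from experience, that it is. That’s roughly Redmond’s predicament, and AlphaGo’s advantage.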
