On the Promise of AlphaGo

Reflections March 15, 2016 6:45 pm

 

A game of Go. Chad Miller (CC BY-SA 2.0)

Games are an old venue for the development of AI programs, and have been part of the AI repertoire since the beginnings of the field. Turing wrote about games in his 1948 paper “Intelligent Machinery,” in which he suggested chess, noughts and crosses, bridge, and poker as among the games to teach machines. Why the focus on games that foreground disembodied mental activity? Because a “thinking machine” constructed in the image of a man would be “a tremendous undertaking” and therefore “altogether too slow and impractical” a research project (p. 39). Minsky, in the introduction to the 1968 Semantic Information Processing, is more explicit about why games receive so much attention: not because they are simple, but because they “give us, for the smallest initial structures, the greatest complexity” (p. 12). These are clear value judgments, delineating what counts as valuable and practical research in AI (where to move the pieces, but not how to articulate the hand that moves them). But they are also promissory statements in a broader sense. Making an AI to play bridge was never the point (the market for AI bridge players can never have seemed particularly large!); rather, the expectation in both Turing’s and Minsky’s accounts is that by building such an AI we will learn useful things about intelligence full stop, things we could then use to make useful tools in other areas.

Let’s consider AlphaGo‘s spiritual ancestor, IBM’s Deep Blue, and its (contested) 1997 victory over Garry Kasparov. While Deep Blue was certainly an amazing hardware undertaking, one of the most powerful supercomputers of its day, was its victory a groundbreaking event in the history of AI? There are several ways to answer this: it was surely a PR victory, even if the machine’s strategy depended more on brute-force search through vast numbers of possible moves than on any practice we would recognize as “intelligence.” But it is worth asking whether Deep Blue’s promise was sustained. If the AI field builds systems to play games because solving the game problem will also solve others—implicitly, because to build the chess machine researchers will have to solve all sorts of other problems and figure out all manner of things about how to build intelligent machines that will be useful elsewhere—did this actually happen in this particular case? Or, by doubling down on brute-force search methodologies, did researchers instead sweep away much of the complexity that the promissory notions of Deep Blue suggested it would address? Retrospectively, the latter seems to me to be the case.

The same question stands for AlphaGo. How much is this an achievement of pure computational power, of sufficiently well-trained statistical learning techniques? And how much has it opened regimes of knowledge that previously remained closed? There is certainly a company politics that may be involved here, but Yann LeCun’s Facebook post on the matter provides a useful reminder that not everything has been fundamentally solved. The promissory dimensions of game-playing AI seem to reach beyond what is actually achieved. While it is useful and even intellectually rewarding to build a world-champion chess or Go machine, neither “solves” intelligence on its own. Nor will the next milestone in game-based AI. The disjunctions between the promised and the realized—and how the promises were constructed to begin with—remain intriguing objects for study.

2 Comments

  • Nice post, Erik. My own take on these things is to let the computer scientists fight it out over what “intelligent” means, rather than try to articulate a definition myself. (The Yann LeCun Facebook link is quite good, actually.) These debates over intelligence are often like the No True Scotsman story. You know, the one where a man tells another about a crime that a third man committed. To which the second man says: no Scotsman would do such a thing. The first replies that the third man was a Scot. So the second man says: well, no real Scotsman would do such a thing. That’s how it goes with intelligence, all the way back to Descartes and Babbage and Turing and Dreyfus and Minsky.

    I also wonder if Watson might be more significant than AlphaGo here, given that its architecture relied so heavily on information from the Web to build its models. IBM is making a lot of investments in using Watson across its other projects. Lots of advertisements too: they ran a huge 4-page spread in the New York Times a few months back.

  • Erik Stayton

    Interesting point about Watson, I have my eye on it but haven’t done a lot of deep investigation in that regard. My gut reaction is that it is a very different sort of project, however.

    There is an important politics to not leaving “intelligence” as an actor’s category. Though not explicit above, there is a real sense in which AI sees its projects dogged by the No True Scotsman issue: the actors’ own formulation might be that “intelligence is what hasn’t been done yet.” One part engineering credo, one part exasperated response to public perceptions.

    But the responsibility for this repeated curse, in my reading, is often placed on philosophers, or the fickle public, or anyone but the researchers themselves. Engineering systems that perform interesting tasks is different from seeking transcendental truths. But to call your discipline AI, to claim “intelligence” as one’s expert domain, is also to throw down the gauntlet for such challenges. My sense is that neither Deep Blue nor AlphaGo was ever intended to be a Scotsman per se. But they can masquerade as such, which is useful only until it becomes a liability.

    From the field’s perspective, these things are tools toward developing particular kinds of knowledge and skills. As a society, we misunderstand or misidentify that knowledge, and its purpose, at our own peril. The language of both actors and observers shapes the way the research climate proceeds, and I think that space of conflict is therefore ripe for some reclamation and intervention.
