Games are an old venue for the development of AI programs, and have been part of the AI repertoire since the beginnings of the field. Turing wrote about games in his 1948 paper “Intelligent Machinery,” in which he suggested chess, noughts and crosses, bridge, and poker as among the games to teach machines. Why the focus on games that foreground disembodied mental activity? Because a “thinking machine” constructed in the image of a man would be “a tremendous undertaking” and therefore would be “altogether too slow and impractical” a research project (p. 39). Minsky, in the introduction to the 1968 Semantic Information Processing, is more explicit about why games receive so much attention: not because they are simple, but because they “give us, for the smallest initial structures, the greatest complexity” (p. 12). These are clear value judgments, delineating the lines of what is considered to be valuable and practical research in AI (where to move the pieces, but not how to articulate the hand that moves them). But they are also promissory statements in a broader sense. Making an AI to play bridge was never the point (the market for AI bridge players can never have seemed particularly large!), but the expectation in both Turing’s and Minsky’s accounts is that by building such an AI we will learn useful things about intelligence full stop. And we could then use those lessons to make useful tools in other areas.
Let’s consider AlphaGo’s spiritual ancestor, IBM’s Deep Blue, and its (contested) 1997 victory over Garry Kasparov. While Deep Blue was certainly an amazing hardware undertaking, one of the most powerful supercomputers of its day, was its victory a groundbreaking event in the history of AI? There are several ways to answer this: it was surely a PR victory, even if the machine’s strategy depended more on brute-force computation of possible moves than on a practice we would recognize as “intelligence.” But it is worth asking whether Deep Blue’s promise was sustained. If the AI field builds systems to play games because solving the game problem will also solve others—implicitly, to build the chess machine researchers will have to solve all sorts of other problems and figure out all manner of things about how to build intelligent machines that will be useful elsewhere—did this actually happen in this particular case? Or, by doubling down on brute-force search methodologies, did researchers instead sweep away much of the complexity that the promissory notions of Deep Blue suggested it would address? Retrospectively, this seems to me to be the case.
The same question stands for AlphaGo. How much is this an achievement of pure computational power, of sufficiently well-trained statistical learning techniques? And how much has it broken into regimes of knowledge that had previously remained closed? There are certainly company politics that may be involved here, but Yann LeCun’s Facebook post on the matter provides a useful reminder that all has not been fundamentally solved. The promissory dimensions of game-playing AI seem to tend to reach beyond what is actually achieved. While it is useful and even intellectually rewarding to build a world-champion chess or Go machine, neither “solves” intelligence on its own. Nor will the next milestone in game-based AI. The disjunctions between the promised and the realized—and how the promises were constructed to begin with—remain intriguing objects for study.