Artificial Intelligence Ethics: Too Many Paperclips

Post March 4, 2018 4:48 pm

Ethical AI. This has begun to become a buzzword in computer science circles. Big names in technology are throwing their weight behind the idea. There are a number of new research projects devoted to it, including a portion of MIT’s new IQ initiative. It even got a mention from former president Barack Obama. But it isn’t clear what ethics for AI would look like. Nick Bostrom, in Superintelligence, presents one of the standard versions of this issue: the paperclip-producing AI told to optimize paperclip production, which goes on to kill humanity in the service of producing as many paperclips as possible. Whether this situation is a likely one is beside the point here. It is enough that it seems of sufficient concern to warrant preemption. It is a vision that tracks with certain common assumptions about what hard AI would be like, because it is itself a story straight out of dystopian science fiction. And it preserves our understandings of today’s technologies. We expect machines to be good optimizers; we don’t expect them to be naturally masterful ethicists. We expect them to disregard life, not to be deeply caring or avowedly nonviolent. In some way this makes sense, if one tries to put oneself in the ontological perspective of the machine in the thought experiment. What does a paperclip factory know of life and love and guilt and death? There may well be a way to program hard AI with ethics, but this is a tremendously difficult research problem–and Asimov’s three laws were developed precisely as a narrative device to produce examples of how such a system would continue to break down in practice. Until we figure this out, perhaps we are quite lucky to have an invisible hand, rather than a paperclip AI, orchestrating the world’s economy.

Also luckily, many recognize, as Kate Crawford wrote in 2016, that the risks of AI are already here. We don’t need to wait for the genocidal paperclip AI to find our loans denied, our communities targeted, and our job applications ignored on the basis of machine learning and basic artificial intelligence. To reverse the application of a famous phrase: the future is already here, it just isn’t evenly distributed; and its costs disproportionately affect those with the least power and influence. AI’s ethics problem is not one for the future; it is not primarily one of hard AIs and superintelligent paperclip producers. It is much more of a garden-variety problem: techniques already in use stand to restructure the way we live today. It is not clear that these research efforts in ethical AI have anything to say on this front. This is not the same problem. It is not about building ethical rules into autonomous software so much as it is about building software ethically. Humans are still deeply involved here, in configuring, using, and responding to these systems. The ethics of which we are in great need look less like Asimov’s laws–which require a greater level of comprehension of the world to even operationalize, and therefore are quite rightly ignored by researchers seeking to produce working ethics rulesets–and more like structural attention to bias, as exemplified by, e.g., nondiscriminatory hiring practices. We are less in need of programmed ethical rules than of human law and procedure. AI needs ethics. But the ethics it needs look very little like the nerdy, sci-fi ethics that get many technical people excited about fundamentally new research. This is the province of quotidian, “low-tech” human solutions: law, accountability, care work. And these are the very kinds of knowledge that are least legible to the logic-based perspectives that appeal to technical laborers, as demonstrated by Diana Forsythe, Alison Adam, and other feminist scholars of AI in the 1990s.

But, you might say, are we not on the verge of producing some autonomous systems that need their own, independent ethical rules, since they will not be subject to human control at all times? I’m working on a paper about this, so look for more in the coming year. (This post itself was just now resurrected from many months in purgatory as a draft, thanks to general exams–I will post here as I rescue other work from the same fate.)