A Few Thoughts on the Recent Google Car “Crash”

Reflections · March 4, 2016, 5:06 pm

As someone who studies automated vehicles, I have had a number of people send me links about the recently publicized “first” at-fault Google car crash (see this article, for example). It is a genuinely interesting situation, and it presents a good moment to comment on two important issues.

A self-driving Lexus at an intersection. Google publicity photo https://www.google.com/selfdrivingcar/where/.

First: As I wrote in my M.S. thesis last spring, the notion of responsibility becomes more difficult to pin down in distributed and hybrid systems of humans and machines. Around that time, the frequency of Google car accidents came to light, with the inevitable comparisons to human accident rates (for example, these estimates of 0.3 per 100,000 miles for human drivers and 0.6 per 100,000 for Google’s fleet last year). Google rightly claims that, legally, it is not “at fault” for these accidents. But it is important to note that if Google’s cars were indeed being crashed into from behind at a higher rate than normal, their entirely law-abiding behavior is not necessarily neutral. And there was then much public discussion about reprogramming the cars to behave “more like humans.” The notion of fault and “at-fault” accidents is a human one, based at least in part on human competencies and capabilities (reaction times, etc.). It is not entirely unproblematic to say that a new entrant into a complex system, one that technically follows the rules but does things that are unpredictable to others within that system, is operating appropriately and in no way a locus of “fault.” These knock-on effects already occur, but the artificial nature of the new actors in this space brings them to our attention. In this most recent incident we have a system entering a new legal category of fault. But all automated systems have the potential to disrupt the environment around them in ways that are important to consider, even when those disruptions do not map onto legal definitions.

Second: Setting aside the point that productively assessing fault, in a systems-engineering sense, involves more than legal categories, this recent event points to something else: no device is flawless or bug-free. On the other hand, an accident like this is by no means the death knell for anyone’s technology. Accidents like this (and worse) will occur as a matter of statistical certainty as more automated devices take to the road, even as human-human accidents should decrease. But the numbers for these categories do not exist yet, while the rhetorics that motivate different portrayals of automated-vehicle futures already do. This is another encouragement to ask how the rhetorics around these systems operate, how fault tolerance gets described, and how the public reacts to breakdowns. The official response in this case, that the cars will be reprogrammed to “more deeply understand that buses (and other large vehicles) are less likely to yield to us than other types of vehicles” (emphasis mine), is an interesting mini-case that suggests many questions: What does it mean to more “deeply” understand a thing? Who is speaking there? Which actor’s term is that? It seems likely that “deep” would generally be understood differently when applied to the beliefs of people and to those of machines. Understandings and misunderstandings of technology are built through these sorts of rhetorics.
