[A modified version of a post that I wrote here on my personal blog.]
In a few weeks, the US Supreme Court will hand down what could be a historic decision in the case King vs. Burwell. The case concerns the Obama Administration’s signature healthcare legislation: the Patient Protection and Affordable Care Act, sometimes called simply the ACA or Obamacare (though the latter term is often used pejoratively by its opponents). The ACA rests on three pillars: regulations about the content and cost of health insurance plans, the individual mandate that makes it compulsory for every resident to buy health insurance and penalizes those who don’t, and federal subsidies for those who can’t afford to buy insurance on their own.
King vs. Burwell is about that last part. It asks: are federal subsidies only for insurance exchanges set up by individual states, or do they also apply to exchanges set up by the federal government? This distinction matters because many Republican-led states refused to set up exchanges on their own, prompting the federal government to do it for their residents. The text of the law talks about subsidies for exchanges established by the “State” without quite specifying what this State is. Does “State” mean the states that constitute the United States, or does it mean the state in the abstract, whether the federal government or the individual states? One would think: how difficult can that be? It seemed very clear in the long battle over the ACA that the subsidies were meant for everyone, irrespective of who set up the particular insurance exchange. And yet the question was contentious enough that lower courts disagreed with each other and the Supreme Court took it up, with a good chance that it might rule in a way that destroys the very foundations of the ACA.
King vs. Burwell is just one example of a hotly contested, bitterly divisive political battle in which experts (i.e. lawyers and judges) are involved from start to finish. And sometimes this starts to generate a kind of disillusionment about expertise and its role in a democratic society. (STS, from its inception, has taken a complex, if not ambivalent, stance on the role of expertise in a democracy.) On the other hand, influential scholars like Cass Sunstein, relying often on studies of expertise coming from social psychology, suggest that the solution is to trust experts more, not less. In the rest of this post, I want to try to reconcile these two stances, STS and social psychology, using King vs. Burwell as my rhetorical foil.
Social psychologists and the public reception of science
The battle over the ACA is just one instance of the numerous bitter political battles fought all over the United States over issues like global warming, pollution and regulation, battles in which scientific and technological experts figure prominently. In recent times, social psychologists, economists and other social scientists have become interested in the question of why people (or rather, certain people) sometimes don’t accept scientific findings. Why, for instance, do Supreme Court judges disagree with each other so much? After all, in the law, aren’t there correct answers? These people are supposed to be experts; if they disagree with each other so much, should we even trust them?
Many of these social scientists have converged on a concept called motivated reasoning: the idea that our reasoning powers are directed towards particular ends, and that we therefore tend to pick the facts that best fit our needs and motivations. Motivated reasoning is universal: all human beings do it, including experts. The question is often: do experts do it less than others? To be sure, this is clearly not how an STS scholar like Brian Wynne (e.g. 1996, 2003) would explain the disagreements between experts and laypeople. It’s not even the approach taken by Naomi Oreskes or Paul Edwards (2010), who frame their reasons for trusting climate scientists as pragmatic choices, drawing extensively on the history of these communities. At the same time, though, “motivated reasoning,” even if couched in the slightly problematic scientistic idiom of social psychology (“bias,” etc.), seems like a fairly benign concept to me: it suggests that “data” is always interpreted in the light of previously held beliefs; that facts and values are not easily separable in practice.
Where do STS and social psychology diverge?
Sometimes, though, social psychological research does get put to strange ends. In an article in the Washington Post, journalist Chris Mooney reports on a recent experiment by law professor and psychologist Dan Kahan designed to test whether experts engage in less motivated reasoning than non-experts; Kahan’s specific point being that judges are more than just politicians in robes. Kahan and his collaborators asked the subjects in their pool (judges, lawyers, lawyers-in-training, and laypeople statistically representative of Americans) to decide whether a given law applied to a hypothetical incident; the question was: would they apply the rules of statutory interpretation correctly? First, subjects were informed about a law banning littering in a wildlife preserve. Next, they were told that a group of people had left litter behind, in this case reusable water containers. But there was a catch: some were told that the containers were left behind by aid workers helping illegal immigrants cross the US-Mexico border safely. Others were told that the litter belonged to a construction crew building a border fence. All were polled to determine their political and ideological affiliations.
Predictably, depending on their ideological beliefs, people came down on different sides of the issue: Republicans tended to be more forgiving of the construction workers, and so on. What was different was that judges and trained lawyers tended, more than laypeople, to avoid this bias. They interpreted the law correctly (the correct answer here was that this didn’t constitute littering, because the water containers were reusable) despite their ideological convictions. So far, so good. I interpret the experiment to be saying that lawyers are subjected to special institutional training, unlike the rest of us, and that this habitus lets them reach the “correct” result far more frequently than we do. Experts are different from the rest of us, in some way.
But what’s interesting is the conclusion that Mooney (with some warrant from Kahan) draws from this experiment: that while experts are biased, they are less biased than laypeople, and that therefore experts should be trusted more often than not. (Note the analytical move here as well: generalizing from lawyers to climate scientists, a move that fits within the parameters of social psychology but not so much with STS.) Scientific American’s John Horgan has a pragmatic take on this: it leaves open the question of which experts to trust, especially when they disagree. And besides, to trust experts simply because they are experts seems, well, against the spirit of a democratic society.
Reconciling STS and social psychology
Should we then dismiss what Mooney is saying? I think there’s something telling about the particular result that Mooney uses to make his point. And here we circle back to my opening: King vs. Burwell, the would-be historic case currently pending before the US Supreme Court. This case is precisely about statutory interpretation (or perhaps about the philosophy of statutory interpretation). And the oral arguments, at least, showed that the Supreme Court justices are thoroughly divided (John Roberts’ silence and Anthony Kennedy’s solicitousness for states’ rights gave liberals some hope).
How might one reconcile the findings of the Kahan study with what’s happening at the Supreme Court? The Supreme Court justices are certainly experts: elite, well-trained, and at the top of their respective games. And yet here they are, right at the center of a storm over what journalist David Leonhardt has called the “federal government’s biggest attack on inequality.” I think there’s a way. Experts are conditioned to think in certain ways by virtue of their institutional training and practice, and when the stakes are fairly low, they do just that. But once the stakes are high enough, things change. What might seem like a fairly regular problem in ordinary times, a mere question of technicality, may not look like one in times of crisis. At this point, regular expert-thinking breaks down and things become a little more contested.
But does that mean that judges are just politicians in robes? (Which is the thesis that Dan Kahan set out to debunk.) Not really. The US Supreme Court actually resolves many, many cases with fairly clear majorities, more than a third of them through unanimous decisions. These cases hardly ever make it into the public eye, and they involve what seem, to us at least, like arcane questions of regulation and jurisdiction. Another way to put this is that these cases are “technical” because they are not in the public eye; no great stakes attach to these decisions, except for the parties in question. When the stakes are high (Obamacare, campaign finance, abortion, gay marriage), the Supreme Court, just like the rest of the country, is hopelessly polarized. And a good thing too, because fundamental crises in values are best addressed through Politics (with a capital P) rather than left to bodies of experts.
Social psychological experiments on political bias in the “public understanding of science” need to be understood not as grand truths about how people interpret, but as historically contingent findings. Yes, judges will vote more “correctly” than laypeople, but a toy case presented as part of a social psychological study is not the same as an actual case. Real cases have audiences, who construct their meanings along with the judges and the lawyers. When the first challenge to Obamacare was floated (over the constitutionality of the mandate), it seemed like there was no way the case could even get to the Supreme Court. But it did, and the individual mandate just about squeaked through. King vs. Burwell seems, on the face of it, even shakier on legal grounds, but no one is underestimating it anymore. Experts are as susceptible to crisis as the rest of us when the stakes are high and fundamental values are in question.
In the article, Mooney takes a somewhat gratuitous swipe at STS as the discipline that is interested in “undermining the concept of scientific expertise.” But that misunderstands the point of science studies. These studies weren’t meant to show that experts are “biased.” They were meant to show that expert practices and discourse are designed to construct certain questions as “technical.” This is not necessarily a bad thing, but it does, at certain points, drown out other voices who disagree with the experts’ conclusions (though this is true of most political choices). What is more, once experts are framed as objective, arguing only from facts rather than values, opposing voices, which have no recourse to the language of facts, get delegitimized even further. The STS recommendation was not that we need to trust experts more or less, but that public debates cannot be satisfactorily resolved by a division of labor in which one group of people (experts) is entrusted with matters of fact and everyone else with matters of value; rather, values and facts need to be debated together by both experts and laypeople (Jasanoff 2003).
Edwards, Paul N. A Vast Machine: Computer Models, Climate Data, and the Politics of Global Warming. Cambridge, MA: MIT Press, 2010.
Jasanoff, Sheila. “Breaking the Waves in Science Studies: Comment on H.M. Collins and Robert Evans, ‘The Third Wave of Science Studies’.” Social Studies of Science (2003): 389-400.
Wynne, Brian. “May the Sheep Safely Graze? A Reflexive View of the Expert-Lay Knowledge Divide.” In Risk, Environment and Modernity: Towards a New Ecology, edited by S. Lash, B. Szerszynski, and B. Wynne, 45-83. London: Sage, 1996.
Wynne, Brian. “Seasick on the Third Wave? Subverting the Hegemony of Propositionalism: Response to Collins & Evans (2002).” Social Studies of Science (2003): 401-417.