In April, I found myself in Penn Law surrounded by lawyers berating lawyers. The berating group sighed and shook their heads as they considered other lawyers’ resistance to their cause — the cause of using evolutionary biology in legal practice, based on the belief that only science can truly explain why people do the things they do.
These lawyers belong to the Vanderbilt University-based Society for Evolutionary Analysis in Law (SEAL), which gathered at Penn for its latest annual conference. I attended the first half of the conference as a student blogger for Penn’s Center for Neuroscience and Society, which co-sponsored the event. I never blogged about it, because I walked away from the one day I attended with mixed impressions. (I guess that’s what happens when you listen to people in suits casually debate the existence of free will.)
But this week I read an article in The American that reminded me of the SEAL conference by linking neuroscience with views on right and wrong. Reflecting on the article and the conference, I’m reminded of the problem of confusing explanations with justifications. Before we get to that problem, though, some context on SEAL and moral behavior research might help.
For one thing, I don’t think SEAL’s mission is obvious. Discussing the ways scientific findings can and should be used in law is fair enough. But how can evolutionary theory apply directly to the world of settlements, judges and juries?
The reasoning there turns out to be straightforward, if not foolproof. Settling legal questions involves judgment of human behaviors and motivations — for example, determining whether someone did wrong, and if he did, whether he really meant to, and so on. These behaviors and motivations reflect activity in the brain, which has been shaped by evolution. Putting those pieces together, you might conclude that we must look to the evolution of behavior, via the brain, to understand behavior and then judge it in a legal setting. Voilà — the argument for hardcore evolutionary analysis in law is born.
To some, that argument has only been bolstered by the growth of scientific research on morality in recent years. There’s an entire neuroscience literature looking at the brain circuits involved in our moral reactions and decisions. There’s also a moral psychology literature on patterns in our behavioral responses to moral dilemmas. Research in both fields suggests that humans are wired with certain moral biases, challenging the idea that morals just come from socialization. In fact, some of the hottest research claims to have found moral preferences in babies, based on experiments with Good and Evil Puppets. All this work points to some kind of innate human morality, with implications that could affect the way we judge moral and thus sometimes legal responsibility.
Now, there’s nothing wrong with studying how people make moral judgments. The problem comes when this notion of innate morality is co-opted to suggest that the way something is, is the way it should be — to say that is = ought. This “is-ought problem” was most famously laid out by the 18th-century Scottish philosopher David Hume. Hume called out other moral philosophers for confusing what is with what ought to be, and pointed out the circularity of that reasoning. It’s the ultimate “because I said so!” argument: The decisions humans make must be moral…because we make them and think we can explain them.
That reasoning might seem so clearly flawed that few people follow it. But we fall back on arguments like that every day, like when we judge the “wrongness” of something by how much it disgusts us. (Would you be more outraged seeing someone kicking a cute puppy, or kicking a mean but helpless old man? If you said puppy, does that really mean it’s worse to kick the puppy? I don’t think so.) The tone of is = ought thinking is also present when we take comfort in explanations. Something in the world, the “is,” becomes acceptable, or turns into an “ought,” once we think we can explain it.
Science can be easily exploited for is = ought thinking, because its very nature is to test and offer explanations. That’s the point raised toward the end of the article I read in The American. I generally don’t side with a publication that once called income inequality a “myth.” I’m also no fan of the piece’s moral generalizations, nor the philosopher it focuses on, James Q. Wilson. That said, I actually agree with its gist: scientific explanations for moral attitudes tell us nothing about how we should act. Yet lots of us treat those explanations as some kind of holy grail that tells us everything we need to know, because of a tendency to privilege scientific explanations over, say, social or economic ones.
That tendency nagged at me while I sat through SEAL. The conference itself just consisted of carefully thought-out presentations and discussions. And some attendees seemed happy with a measured discussion of where science can contribute to law. But one side of SEAL represents a movement to bring evolution into all fields that look critically at human behavior — see this publication edited by one of the SEAL conference speakers.
I think such a movement reflects another sneaky confusion of “is” with “ought.” Evolutionary theory and scientific research may illustrate how people work and, from a big-picture view, why we got to be that way. In other words, they can help us understand what “is” in our world. But assuming that science should direct our policies toward our own behavior — i.e., suggesting that lawyers are ignorant if they want to do their thing without fussing over neuroscience or genes — belittles the “ought” side of things. Notions of what “ought” to be come from human judgments on right and wrong, good and bad. Flawed as those judgments can be, we make (and re-make) valuable ones daily without cracking open books on Darwin. And many of the most agonizing questions about human behavior — including why people harm each other and how to respond to that harm — are ones that neither science nor any other approach can outright solve.
Explanations can be great. Let’s just not oversell them, especially if it means exploiting or overextending science.