The Biology of Right and Wrong
Philosophers have long debated the foundations of moral decision-making. “Rationalists” from Socrates to Immanuel Kant argued that people should rely on intellect when distinguishing right from wrong. “Sentimentalists” like David Hume believed the opposite: emotions such as empathy should guide moral decisions.
Now Hazel associate professor of the social sciences Joshua Greene, a philosopher, experimental psychologist, and neuroscientist, is trying to resolve this dispute by combining brain-scanning technology with classic experiments from moral psychology to provide a new look at how rationality and emotion influence moral choices. His work has led him to conclude that “emotion and reason both play critical roles in moral judgment and that their respective influences have been widely misunderstood.”
Greene’s “dual-process theory” of moral decision-making posits that rationality and emotion are recruited according to the circumstances, with each offering its own advantages and disadvantages. He likens the moral brain to a camera that comes with factory presets, such as “portrait” or “landscape,” along with a manual mode that requires photographers to make adjustments on their own. Emotional responses, which are shaped by humans’ biological makeup and social experiences, are like the presets: fast and efficient, but also mindless and inflexible. Rationality is like manual mode: adaptable to all kinds of unique scenarios, but time-consuming and cumbersome.
“The nice thing about the overall design of the camera is that it gives you the best of both worlds: efficiency in point-and-shoot mechanisms and flexibility in manual mode,” Greene explains. “The trick is to know when to point and shoot and when to use manual mode. I think that this basic design is really the design of the human brain.”
Unlike earlier philosophers, he can test his theories with neuroscientific instruments. His primary tool is functional magnetic resonance imaging (fMRI), which takes advantage of the fact that many mental functions are localized to specific areas of the brain. Deliberative reasoning, for instance, is housed in the prefrontal cortex, whereas the amygdala is considered the seat of the emotions. By monitoring blood flow to these areas, fMRI allows Greene and his colleagues to infer when someone is relying on “manual mode” or “automatic settings.”
For one experiment (published in Neuron in 2004), Greene asked his subjects how they would respond to a moral dilemma known as “the trolley problem”—in this case its “footbridge” variant, which involves pushing an innocent stranger in front of a speeding trolley in order to save five other strangers from being killed. Despite the utilitarian value of killing a single stranger, most respondents said that doing so would be morally wrong: the thought of pushing an innocent person to his death was too much to bear. Yet a handful of subjects said they would end the stranger’s life in order to rescue the others, and Greene found that this group exhibited increased activity in the dorsolateral prefrontal cortex, a brain region he calls “the heart of manual [i.e., rational] mode.”
More recently, though, Greene’s research has led to a slight alteration in his camera analogy. In a series of experiments published in Neuron in 2010, he used fMRI to further explore the interface between rationality and emotion. Again he scanned the brains of subjects responding to the trolley problem, but this time he repeatedly altered the number of lives at stake and the likelihood that the victims could be saved. With 40 lives on the line and a 95 percent chance of their deaths without intervention, for example, nearly every test subject was willing to sacrifice one life to save the rest. But with 15 people at risk and a 50-50 chance for their survival, Greene reports, “[R]espondents were split down the middle as to whether they would intervene.”
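The utilitarian arithmetic behind these variations can be sketched as a simple expected-value comparison. This is an illustrative calculation only, not a model Greene himself reports; the function names and the fixed cost of one life are assumptions for the sketch:

```python
def expected_deaths_without_action(n_at_risk, p_death):
    """Expected number of deaths if no one intervenes."""
    return n_at_risk * p_death

def utilitarian_favors_intervention(n_at_risk, p_death, lives_sacrificed=1):
    """A pure expected-value rule: intervene when the expected deaths
    averted exceed the number of lives sacrificed to avert them."""
    return expected_deaths_without_action(n_at_risk, p_death) > lives_sacrificed

# First variation: 40 lives at stake, 95 percent chance of death without action
print(utilitarian_favors_intervention(40, 0.95))  # 38 expected deaths vs. 1

# Second variation: 15 lives at stake, 50-50 odds
print(utilitarian_favors_intervention(15, 0.50))  # 7.5 expected deaths vs. 1
```

Notably, this bare calculus favors intervention in both scenarios, yet Greene's respondents split down the middle on the second one—an illustration of his point that people are not weighing lives by expected value alone.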
As his subjects considered these variations, they all showed increased activity in brain areas that assign emotional value to items like food and money (the ventral striatum and the insula) and also in a region thought to integrate different approaches to decision-making (the ventromedial prefrontal cortex). These systems are evolutionarily older than the rational brain. Moral decision-making, Greene believes, “involves a whole lot of systems in the brain that are not specifically devoted” to that task alone; his results illustrate that even when humans are considering hypothetical moral scenarios or calculating abstract probabilities, they rely to some extent on emotions for guidance.
Thus rationality, unlike “manual mode” on a camera, cannot function independently of emotion, even in people who tend to be more rational—or utilitarian—decision-makers. “Reason by itself doesn’t have any ends, or goals,” Greene says. “It can tell you what will happen if you do this or that, and whether or not A and B are consistent with each other. But it can’t make the decision for you.”
Yet even though emotions will probably always affect people’s decisions, Greene thinks their input can—and should—be minimized in certain scenarios. By learning more about the neurological mechanisms of moral decision-making, he hopes that people may one day improve the judgments they make. “I think that we are too willing to rely on our automatic settings,” he says. “Our [emotions] are there for a reason and they do a lot of good, but they also get us into trouble in situations that they weren’t designed for.”