Joshua Greene studies the scientific basis for moral decision-making

Brain scans reveal that in moral decision-making, people rely on emotion to guide choices in some situations and rationality in others.

Philosophers have long debated the foundations of moral decision-making. “Rationalists” from Socrates to Immanuel Kant argued that people should rely on intellect when distinguishing right from wrong. “Sentimentalists” like David Hume believed the opposite: emotions such as empathy should guide moral decisions.

Now Hazel associate professor of the social sciences Joshua Greene, a philosopher, experimental psychologist, and neuroscientist, is trying to resolve this dispute by combining brain-scanning technology with classic experiments from moral psychology to provide a new look at how rationality and emotion influence moral choices. His work has led him to conclude that “emotion and reason both play critical roles in moral judgment and that their respective influences have been widely misunderstood.”

Greene’s “dual-process theory” of moral decision-making posits that rationality and emotion are recruited according to the circumstances, with each offering its own advantages and disadvantages. He likens the moral brain to a camera that comes with factory presets, such as “portrait” or “landscape,” along with a manual mode that requires photographers to make adjustments on their own. Emotional responses, which are influenced by humans’ biological makeup and social experiences, are like the presets: fast and efficient, but also mindless and inflexible. Rationality is like manual mode: adaptable to all kinds of unique scenarios, but time-consuming and cumbersome.

“The nice thing about the overall design of the camera is that it gives you the best of both worlds: efficiency in point-and-shoot mechanisms and flexibility in manual mode,” Greene explains. “The trick is to know when to point and shoot and when to use manual mode. I think that this basic design is really the design of the human brain.”

Unlike earlier philosophers, he can test his theories with neuroscientific instruments. His primary tool is functional magnetic resonance imaging (fMRI), which takes advantage of the fact that many mental functions are localized to specific areas of the brain. Deliberative reasoning, for instance, is housed in the prefrontal cortex, whereas the amygdala is considered the seat of the emotions. By monitoring blood flow to these areas, fMRI allows Greene and his colleagues to observe exactly when someone is relying on “manual mode” or “automatic settings.”

For one experiment (published in Neuron in 2004), Greene asked his subjects how they would respond to a moral dilemma known as “the trolley problem,” which involves pushing an innocent stranger in front of a speeding trolley in order to save five other strangers from being killed. Despite the utilitarian value of killing a single stranger, most respondents said that doing so would be morally wrong: the thought of pushing an innocent person to his death was too much. Yet a handful of subjects said they would end the stranger’s life in order to rescue the others, and Greene found that this group exhibited increased activity in the dorsolateral prefrontal cortex, a brain region he calls “the heart of manual [i.e., rational] mode.”

More recently, though, Greene’s research has led to a slight alteration in his camera analogy. In a series of experiments published in Neuron in 2010, he used fMRI to further explore the interface between rationality and emotion. Again he scanned the brains of subjects responding to the trolley problem, but this time he repeatedly altered the number of lives at stake and the likelihood that the victims could be saved. With 40 lives on the line and a 95 percent chance of their deaths without intervention, for example, nearly every test subject was willing to sacrifice one life to save the rest. But with 15 people at risk and a 50-50 chance for their survival, Greene reports, “[R]espondents were split down the middle as to whether they would intervene.”

As his subjects considered these variations, they all showed increased activity in brain areas that assign emotional value to items like food and money (the ventral striatum and the insula) and also in a region thought to integrate different approaches to decision-making (the ventromedial prefrontal cortex). These systems are evolutionarily older than the rational brain. Moral decision-making, Greene believes, “involves a whole lot of systems in the brain that are not specifically devoted” to that task alone; his results illustrate that even when humans are considering hypothetical moral scenarios or calculating abstract probabilities, they rely to some extent on emotions for guidance.

Thus rationality, unlike “manual mode” on a camera, cannot function independently of emotion, even in people who tend to be more rational—or utilitarian—decision-makers. “Reason by itself doesn’t have any ends, or goals,” Greene says. “It can tell you what will happen if you do this or that, and whether or not A and B are consistent with each other. But it can’t make the decision for you.”

Yet even though emotions will probably always affect people’s decisions, Greene thinks their input can—and should—be minimized in certain scenarios. By learning more about the neurological mechanisms of moral decision-making, he hopes that people may one day improve the judgments they make. “I think that we are too willing to rely on our automatic settings,” he says. “Our [emotions] are there for a reason and they do a lot of good, but they also get us into trouble in situations that they weren’t designed for.”
