John Harvard's Journal
Counting the War Dead
How lethal are modern methods of warfare? Political scientists affiliated with the International Peace Research Institute (PRIO)—the Oslo-based organization that produces the most commonly cited estimates of war deaths worldwide, in collaboration with Uppsala University in Sweden—wrote in a 2006 journal article that the second half of the twentieth century was marked by “a historically unprecedented network of peaceful ties among the most powerful states” and “a remarkable decline in the numbers of combat deaths worldwide.” The idea that war doesn’t kill all that many people anymore has gained some currency, but at the Humanitarian Health Conference held in Cambridge in September, Ziad Obermeyer ’01, now in his final year at Harvard Medical School, presented data that challenge the PRIO scientists’ conclusions and suggest their estimates undercount war deaths by about 75 percent.
Obermeyer and his colleagues used data from the 2002 World Health Survey (based on representative samples of the populations of 75 countries), which asked respondents whether any of their siblings had died as a result of war, and if so, in what year.
Starting with those data, the researchers made three crucial corrections: for “sampling bias,” related to original family size (the more deaths a family suffered, the less likely any of its members was to turn up in the sample, tending to make the overall number of deaths appear lower); for “recall bias”—the possibility that elderly respondents may not remember the total number of their siblings, particularly siblings who died very young; and for “censoring”—the natural limiting effect of the age range of the respondents and their siblings. (One might expect the survey data to represent quite accurately the deaths of people who were young in 1955—the first year for which the United Nations publishes total mortality estimates by country—because most of their siblings were still alive in 2002. Those who were elderly in 1955, however, were less likely to have surviving siblings who could report on their deaths. Similarly, because the survey did not include children, respondents were unlikely to have very young siblings, and thus the results could be expected to underreport war deaths of the very young, at least for the most recent years.)
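The first of those corrections can be illustrated with a small sketch. Because a family enters the survey only through a surviving sibling, families that lost more members are under-represented; one standard remedy (a simplified inverse-probability weighting, shown here with made-up numbers, not the authors' actual method or data) is to weight each respondent's report by the ratio of original siblings to surviving siblings:

```python
# Hedged sketch of the sampling-bias correction described above.
# Assumption: a family is reached via one of its *surviving* adult
# siblings, so its chance of inclusion is roughly proportional to the
# number of survivors. Weighting each report by
# (total siblings) / (surviving siblings) offsets that bias.

def weighted_death_count(reports):
    """reports: list of (total_siblings, surviving_siblings, war_deaths)."""
    total = 0.0
    for n_sibs, n_alive, n_deaths in reports:
        if n_alive == 0:
            continue  # a family with no survivors can never be sampled
        total += n_deaths * (n_sibs / n_alive)
    return total

# Hypothetical families: the naive sum of reported war deaths is 6,
# but upweighting hard-hit families raises the corrected total to 24.
reports = [(4, 4, 0), (4, 2, 2), (5, 1, 4)]
naive = sum(r[2] for r in reports)
corrected = weighted_death_count(reports)
```

Families in which every sibling died still contribute nothing, which is why even a weighted survey estimate remains a lower bound.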
Obermeyer suggests that the PRIO estimates, which rely on eyewitness and media reports, also entail serious undercounting because of the difficulties inherent in assembling reliable calculations in a war zone. He notes that his group’s estimates were closer to the PRIO numbers for countries with larger conflicts in their histories—Vietnam, for instance—than for countries where the overall death toll was lower, such as Bosnia-Herzegovina.
An inherent limitation of using the World Health Survey data, he acknowledges, is that it yields results mainly for places that have settled down. Most nations facing volatile situations today—the Democratic Republic of Congo, for instance, or Iraq—did not respond to the survey, for understandable reasons. After removing the countries that polled selectively rather than nationwide, or skipped the questions about sibling mortality, or had no war deaths to report, the researchers were left with only 13 countries suitable for their analysis.
Nevertheless, if these new methods debunk current notions that conflicts are less bloody in modern times, the effects could reverberate in contexts from public-health planning to the moral and political justification of warfare. If governments and humanitarian groups begin to assume, as a matter of course, that media estimates of casualties are low, they may respond by increasing the aid they provide. Furthermore, even long after the fact, mortality estimates can be used in the prosecution of war crimes. And, adds Obermeyer, an accurate reckoning is “important for the historical and cultural record of a country’s history.”
He and his coauthors—Christopher J.L. Murray ’83, M.D. ’91, and Emmanuela Gakidou ’95, Ph.D. ’01, both of the Institute for Health Metrics and Evaluation at the University of Washington, where Obermeyer is currently on a one-year research fellowship—are preparing to submit their findings, which are bound to be controversial, for publication. “Whether the data are irrefutable is not necessarily the point,” says Michael VanRooyen, co-director of the Harvard Humanitarian Initiative, which hosted the conference. The new research “looks at the way we’ve measured mortality in a different light, which is entirely what the field needs.” Without such questioning and dissent, VanRooyen notes, “myths and figures from one bad study get perpetuated into being fact.”