Voting rights, according to Harvard Kennedy School assistant professor of public policy Maya Sen, are fundamentally a question of numbers: How many people were eligible to vote? How many actually registered? And who, among those who registered, ended up casting a ballot?
Though this year marks the fiftieth anniversary of the Voting Rights Act of 1965 (VRA), the celebration is somewhat subdued for many: in the 2013 decision Shelby County v. Holder, the U.S. Supreme Court struck down a key part of the VRA. Using data to argue for what the act had already achieved, Chief Justice John Roberts ’76, J.D. ’79, writing for the majority, invalidated a portion of the law that used a formula based on historical voting patterns to determine which counties and states needed to be monitored more closely. “All of these questions”—of the history, efficacy, and continued necessity of the Voting Rights Act—“turn on data collection and analysis,” Sen explained at a Thursday event hosted by the Kennedy School’s Ash Center for Democratic Governance and Innovation. At the event, part of the center’s Challenges to Democracy series, Sen spoke with two fellow political scientists—professor of government Stephen Ansolabehere, and Indiana University assistant professor Bernard Fraga, Ph.D. ’13—and New York Times data journalist Nate Cohn.
The four number-crunchers talked through the important role that statistics have played throughout the last five decades of voting-rights research and policy. To begin, Fraga pointed out how statistics were embedded in the original language of the act. Section 4, the portion that the high court narrowly struck down two years ago, determined which jurisdictions would be placed under extra scrutiny based in part on a simple formula: a jurisdiction was covered if less than 50 percent of its otherwise eligible voters were registered on November 1, 1964, or if less than 50 percent of its eligible voters ended up casting a ballot in that year’s election. As the panelists explained, empirical analysis was also key to Chief Justice Roberts’s argument that the VRA needed a new formula, based less on the way America looked five decades ago, to determine future oversight. Sen shared some of Roberts’s most striking numbers: before the Voting Rights Act, just 6 percent of otherwise eligible African Americans in Mississippi were registered to vote; in 2004, 76 percent were. The VRA, some said, had done its job.
But the Shelby County example also points to the struggle of getting accurate numbers on these complicated topics. Panelists discussed a statistic that Justice Roberts cited during the case’s oral arguments, which implied that black voter turnout was lower in Massachusetts than in Mississippi, a state still under the scrutiny of the Voting Rights Act. The problem is that only a few states, largely in the Deep South, ask registering voters to identify their race and ethnicity. To compare Massachusetts and Mississippi voting patterns in the way Roberts did requires researchers to rely on less-direct data, such as the U.S. Census’s Current Population Survey, which Cohn called a “deeply flawed survey for the purposes of measuring voter turnout.” Researchers have found that it tends to overestimate non-white voter turnout—a big problem in comparing Massachusetts, which is 8 percent black, and Mississippi, which is 37 percent black. Though this survey might be the best we have at the moment, Fraga added, “The point is: is it good enough?”
Looking beyond the Shelby County case, Thursday’s speakers discussed how data are likely to shape voting-rights debates going forward. First, statistical analysis can help researchers understand the potential effects of new election procedures such as voter ID requirements and restrictions on early voting. Cohn ran through some of the numbers on the racial and political breakdown of North Carolina voters who, officials estimated, could not have been “matched” with an official ID source had that state’s voter ID law been in effect before the 2014 election. Though he estimated that the resulting shift in vote totals from Democrats to Republicans would be only about half a point, he pointed out that the state’s 2014 Senate race was, in fact, that close. “That’s a discernible impact,” he acknowledged, but added, “I don’t think we’re going back to Jim Crow, either.”
Panelists also discussed ongoing research into other areas of voting rights, including redistricting and studies of “racial polarization”—the extent to which different races vote, as cohesive blocs, for different candidates. (Thursday’s speakers, along with a dozen more experts in the field, shared some of their more specific research on these questions at a Friday symposium.) During the last decade, panelists explained, increased analytical capacity has revolutionized how scholars, plaintiffs, and government officials can analyze the big questions in voting-rights cases. “The amount of data, the availability of data, the openness and transparency of data,” Ansolabehere said, “has made the conversation more coherent.” Researchers can now run thousands of simulations to show how different district lines will change minority representation—and then simply adjust those models to predict what will happen when, for example, whites are no longer the majority in Texas, or across the country. Still, many of the most important questions about representation and voting can be difficult puzzles for political scientists to solve. Because voting is private, and because most states don’t collect data on registered voters’ race and ethnicity, scholars have to match up disconnected data sets when they want to approximate turnout and voting patterns by race. Complicated questions about voting rights demand detailed information. For Sen, the answer is clear: “We still need more data.”