Bias in Artificial Intelligence
One of the more startling and instructive documentaries of the recent past is 2020’s Coded Bias, which explores a thorny dilemma: in modern society, artificial-intelligence systems increasingly govern and surveil people’s lives—algorithms now routinely make decisions about health care, housing, insurance, education, employment, banking, and policing—yet racial and gender biases are deeply embedded in many of these AI systems (for more background, read “Artificial Intelligence and Ethics,” January-February 2019, page 44).
The film, which premiered at Sundance and is now streaming on Netflix, begins with MIT Media Lab researcher and doctoral candidate Joy Buolamwini recounting an experience from her first semester there in 2015: working on an art project that used AI facial-recognition software, she was confused at first when the computer didn’t seem to register her face. (Since then, a growing body of research, including by Latanya Sweeney, Paul professor of the practice of government and technology, has shown that AI facial-recognition programs do not accurately see dark-skinned faces.) During a striking moment early in the documentary, Buolamwini, who is African American, demonstrates the problem: holding a white mask over her own face, she turns toward her computer, which trills and lights up in response; when she lowers the mask, the computer sits eerily silent.
The documentary presents a damning portrait of AI’s flaws and the efforts under way to correct them, weaving together research and interviews with those who study the field, including several with Harvard connections: Berkman Klein faculty associate Zeynep Tufekci; former Nieman visiting fellow Amy Webb; and data scientist Cathy O’Neil, Ph.D. ’99, author of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (2016). Buolamwini herself is a former Adams House tutor (and performed her spoken-word poem, “AI, Ain’t I A Woman?” at a Harvard conference in 2019).
Central to the film’s account of AI’s history is Meredith Broussard ’95, a data journalist and software developer who in 2018 published Artificial Unintelligence: How Computers Misunderstand the World. Broussard, who has called algorithmic bias “the civil rights issue of our time,” is a journalism faculty member at New York University, research director of the NYU Alliance for Public Interest Technology, and an advisory board member of the Center for Critical Race and Digital Studies. Most people don’t realize the “massive datafication of everyday life,” she said during a recent interview, or the extent to which algorithms, often hidden from view, “are being used to make decisions on our behalf. And because most people don’t know this, they don’t understand that there is enormous bias and discrimination inside these automated systems that are being used to judge us.”
One contributing problem, which she examines in Artificial Unintelligence, is what she calls “technochauvinism”—the belief that technology is always the superior solution, which has warped people’s relationships to computers, to each other, and to the world around them. “Social fairness and mathematical fairness are not the same thing,” Broussard explained. “And mathematical fairness is not always what’s called for….We have to be specific about context. We can’t pretend that it’s going to be adequate to use mathematically fair solutions for social problems. We have to think about what is the right tool for the task.”
Broussard is now working on a second book, about how technology intersects with race, gender, and ability. “I’m writing about machine fairness and the justice system,” she said, and algorithms in policing—an area where racial bias in facial-recognition surveillance tech carries especially pernicious implications. “I’m also looking at AI in cancer diagnosis, and how algorithms are used to assign imaginary grades to real students.” As the mother of a school-aged child, Broussard said, she has “a personal understanding of how my kid is going to be judged by algorithms in the future.”
One essential pathway toward fixing AI’s flaws is to build a more diverse pipeline to tech careers, one that includes women and people of color. Coded Bias makes clear just how white and male the field remains, from its earliest conception among Dartmouth computer scientists in the 1950s to Silicon Valley programmers today, and how that narrow perspective has shaped the industry’s culture and, in turn, its output. “A really small and homogeneous group of people got to decide what they thought intelligence was, and therefore what computational intelligence was,” Broussard said, “and it turns out they were wrong about it in a lot of ways.”
Broussard herself was once in the pipeline to a tech-industry career: arriving at Harvard College in 1991, she initially studied computer science, one of only six undergraduate women in that concentration. She knew two of the other women; the remaining three, she writes in Artificial Unintelligence, “felt like a rumor.” She never met them. “It was really exciting to be there at that time—I started college right when the Web started—but it was also deeply alienating, because there just weren’t a lot of other people like me.” As a black woman, “I also couldn’t look ahead and see anybody who looked like me or who I wanted to model my career on. You need role models. And there weren’t any.”
Broussard left computer science and graduated with a degree in English, having also taken courses in African American studies. Afterward, she took a job as a software developer at AT&T Bell Labs, building automated testing systems for the telecommunications network and working on AI projects. But eventually, unable to shake the same loneliness and alienation she’d felt at Harvard, she left computer science for journalism.
The patterns that pushed her out of the field persist, despite sporadic improvements, especially in education. Women made up 38 percent of the 502 computer-science concentrators at Harvard in 2019-20, and 15 percent were students from underrepresented minorities: black/African American, Hispanic/Latino, or American Indian. Both figures are above the national average. Nationally, however, women’s representation in computer careers has actually decreased in recent decades—from 34 percent in 1990 to 25 percent today, according to analysis from the Pew Research Center. This decline has occurred even as women’s employment in other STEM fields, particularly the life sciences, ticked upward during the same period. Blacks and Hispanics combined make up 15 percent of employees in computer jobs today.
These disparities run parallel, Broussard said, to AI’s broader problems with bias. And those problems must be solved, however complicated the work. “Right now, there is a class differential at work,” she explained, citing Cathy O’Neil’s research. “If you’re rich, you still get judged and evaluated by humans; if you’re poor, you get judged by algorithms.” So for the moment, the wealthy don’t experience the problem as sharply. “Rich people tend to think, ‘I’m still fine, my kids are still fine, my family’s fine, so I’m not going to worry about it.’ But the thing is, the algorithms are coming for everyone—rich people, too—and they’re going to be just as unfair. This issue of algorithmic bias is one that affects all of us.”