
Latanya Sweeney | Photograph by Jim Harrison

When Technology and Society Clash

Latanya Sweeney confronts our all-consuming “technocracy.”

In early 2016, as Donald Trump’s presidential campaign emerged from the Republican pack and Hillary Clinton battled Bernie Sanders through a long Democratic primary season, computer scientist Latanya Sweeney launched a new research project. For more than a decade, information technology had been an increasingly dominant presence in American politics. By that election year, social media, online campaigning, and digital voter records seemed to be everywhere. And so Sweeney, the Paul professor of the practice of government and technology, decided to follow the election more closely than usual. With a group of students, she began studying the places where technological innovation and the electoral process crossed paths. “We wanted to see,” she says, “how technology could make it all go wrong.”

That they did. A few weeks before the March primary in North Carolina, Sweeney and her class found that election board websites were giving incorrect information when voters searched for their local polling places. (She and her students had built a web service to give voters across the country the correct information.) In another experiment, they discovered that online voter registration systems in 35 states and the District of Columbia were vulnerable to identity theft: hackers could change voters’ addresses or party affiliation, request absentee ballots, or delete records altogether—all with a few pieces of publicly searchable or easily purchased information.

As the campaign wore on, Sweeney and her students uncovered something else: Twitter accounts run by what she calls “persona bots” purporting to be real people, with plausible-seeming names, photos, and follower counts. There were thousands of these fake accounts, managed in many cases, she says, by “state actors”—foreign governments—and they were feeding propaganda to followers on Twitter, nearly all of whom were human. These bots were an innovation then, and Sweeney’s group was the first to find and describe them. During a public talk last fall at the Harvard Kennedy School (HKS), she recounted the whole story: the bots didn’t spout propaganda all the time; they mostly remained inconspicuous, posting harmless comments, soliciting followers, building trust. “And then all of a sudden,” she said, the bot “would deliver a payload of total disinformation,” a powerful method for disseminating believable falsehoods. “This is the formula for how people turn over their life savings to someone”—or their votes.

Illustration by Taylor Callery

She describes the early encounters with persona bots and the dawning realization of what they were up to as an “uncanny experience.” But for Sweeney, it’s not an uncommon one: for nearly three decades, her consuming occupation has been exposing the unexpected consequences of new information technologies. She helped launch the subfields of data privacy and algorithmic bias. Laws and textbooks have changed in response to her work. So has public sentiment, as Sweeney continues to illuminate the technological threats to privacy, fairness, and, increasingly, to democracy itself.

Now 64, with close-cropped hair and black-rimmed glasses, she exudes a gentle cordiality that somehow feels friendly and formal at the same time. In conversation, she’s witty and unfussy, but also rigorously precise. When she’s at home or in her office, it’s not unusual to see her wearing a Currier House hoodie—since 2016, Sweeney and her spouse, attorney Sylvia Barrett, A.L.B. ’95, have served as faculty deans to the House’s 300 undergraduates, living there with their son, Leonard.

A full-time member of Harvard’s faculty since 2011, with joint appointments in the government department and HKS, she is director of both the Data Privacy Lab, which she founded as a young scholar more than 20 years ago, and the Public Interest Technology Lab, which she helped launch in 2021. The two organizations share related missions, both central to Sweeney’s work. The data lab, based at the Institute for Quantitative Social Science, investigates specific problems in information-sharing and techniques for keeping data confidential while preserving its usefulness. The tech lab, housed at the Shorenstein Center on Media, Politics, and Public Policy, has a wider focus; it’s meant to be a kind of “sandbox” of experimentation, where students, researchers, and practitioners can explore technology’s effects on societal issues and design interventions for the problems they uncover.

“We live in a technocracy,” Sweeney often says: today’s technologies are so pervasive that they have engulfed every corner of American life, from home and work to leisure, purchases, and politics. Now, “technology design is the new policymaker,” she explains. “Because if the technology doesn’t respect it, or doesn’t allow for it, then having a law”—regarding employment discrimination, for instance, or privacy, or voting rights—“doesn’t matter anymore.” Government policies take years to change, but technological advances happen in months or weeks, a mismatch that grows only more acute. “It’s sort of like we’re in a car, and we’re going for this ride, but nobody’s driving,” Sweeney told attendees at a data science conference a few years ago. “People are taking turns randomly.”

Right now, “Everything is up for grabs.”

“I Could Get a Computer to Do Anything”

If Sweeney is a Cassandra, she’s an optimistic one, with a strikingly durable belief in the benefits that technology can provide. That belief has driven her curiosity and ambition since childhood, and it’s why her discoveries of technology’s dangers are usually followed by tech-based proposals for how to fix them. It’s the reason her signature course—taught every spring and full of hands-on experiments rooted in real problems—is titled “Technology Science to Save the World.” And it’s why her forthcoming book, to be released in 2025 and focused on how society can maintain the benefits of technology without the harms, is titled In Tech We Trust.

This optimism springs in part from her earliest introduction to mathematics. Sweeney grew up in Nashville, Tennessee, raised by her great-grandparents. When she was five years old, her kindergarten teacher called home one day, worried that Sweeney was struggling with basic addition; in school, the child kept giving the wrong answers. So, the family bought a set of flash cards, and Sweeney’s aunt sat down to practice with her. Three plus two? “Six.” Four plus two? “Eight.” Three plus four? “Twelve.” The answers came instinctively, without hesitation. After a few tries, her aunt suddenly realized: Sweeney wasn’t adding the numbers—she was multiplying them.

To this day, her aunt’s astonished expression remains one of Sweeney’s most vivid memories. “Everyone had been trying to figure out what was wrong with me,” she says, but nothing was wrong. From that moment, she was in love with mathematics. “I just found myself diving deeper and deeper into it,” she says. An uncle had graduated from Fisk, a historically black university in Nashville, and Sweeney pored over the textbooks he’d left behind. By seventh grade, she was taking classes at a local community college, studying vector mechanics and geometric proofs.

She loved math because she was good at it, but also because its orderliness offered a refuge from what often felt like a messy home life. Sweeney’s great-grandparents had adopted her when she was a baby, stepping in when her parents couldn’t care for her. The large, extended family was incredibly close, but “I was this oddity, really,” she says. “Nobody else in my entire neighborhood, in my church, in my school, had parents who had ever even been divorced.” Her family dynamic, by comparison, was “complicated.”

She discovered computer science at Dana Hall School, a girls-only college-prep academy in Wellesley, Massachusetts, where she arrived in 1973 as a 14-year-old boarding student on full scholarship, part of the first wave of African Americans to enroll after the civil rights movement. In a computer-programming course, she found a calling. It was everything she loved about math, but better. “Writing a proof on paper—that was nice,” she says. “But a program actually did things in the real world.” She began writing programs for all sorts of tasks: to conjugate verbs, to coordinate meetings for the administrative office. “There was nothing at Dana Hall that I didn’t write a program for,” she says. It felt like a new world was dawning.


She enrolled at MIT in 1978. She remembers the institute then as very white and male, with a “strong belief in the religion of science.” The latter part seemed promising: a place where ideas would be welcomed and developed. “It was an incredibly exciting time,” she adds. “You could see the revolution coming with personal computers. You knew it was going to change everything.” She wanted to be a part of it: “I lived in the computer lab.” But the welcome on campus didn’t always extend to her. Usually, she was the only woman or African American in the classroom, and she often found her ideas ignored or dismissed. It was hard to find supportive mentors. “You can imagine the situations,” she says. “I had some difficult times there.” Plus, in those early days of computer science, “It didn’t take long before you started knowing more than any instructor,” whose Ph.D. was in mathematics or engineering, not computers. In 1981, she dropped out to start her own software consulting company in Kendall Square, which she ran for 10 years (Barrett, whom she’d met while at MIT, was her business partner). “Back then, in the computer business, no two days were ever the same,” she says. “We were doing some amazing things. I felt like I could get a computer to do anything.”

But she kept running into a familiar challenge. Many customers were doctorate-level computer scientists, and among them Sweeney gained a reputation for solving difficult problems and building impossible systems. Yet there was a disconnect: “I wanted my ideas known,” she says. “But what happens is, you’re this young black girl—so, who are you? I was doing this work and getting paid, but my ideas weren’t getting traction.” In fact, she says, customers in academia sometimes published their own papers about her innovations. “The only way I was going to get the kind of attribution I felt that I deserved,” she realized, “was by having academic credentials.” In 1991, she enrolled in the Harvard Extension School, where she could study while still working. She dove into the curriculum, taking graduate-level classes in computer science—many with senior lecturer Henry Leitner, who became an important mentor—while also studying poetry, philosophy, religion, and language. After earning her bachelor’s degree in 1995, she re-enrolled at MIT, this time as a graduate student. In 2001, she earned a Ph.D. in computer science, becoming the first black woman to do so at the institute.

By then she had already made a shattering discovery with what arguably remains her most famous experiment, launching her career as an augur of technology’s unintended consequences.

“It Was Really a Can of Worms”

Sweeney has come to see the tech revolution’s fallout as a series of “technology-society clashes” that arrive in waves. The first to surface was data privacy. In her famous experiment, conducted in 1997, she pulled the supposedly anonymous medical record of then-governor William Weld ’66, J.D. ’70, from a publicly available dataset of Massachusetts employees’ health information (see “Exposed,” September-October 2009, page 38). Such datasets are invaluable for public health research, so state officials had removed what they believed to be all the identifying information, leaving only a few demographic tidbits: ZIP code, birth date, and gender. Sweeney cross-referenced the data with a voter list she’d purchased from Cambridge City Hall for $25—and to her surprise was quickly able to relink Weld’s name to his medical record. Later, she calculated that 87 percent of the population could be uniquely identified using only those three pieces of information.
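The mechanics of that linkage are simple enough to sketch in a few lines of code. Here is a minimal, hypothetical illustration (the file and column names are invented for the example), written in Python with the pandas library:

```python
# A linkage attack in miniature: join a "de-identified" health dataset
# to a public voter roll on the three quasi-identifiers Sweeney used.
# File and column names are hypothetical.
import pandas as pd

health = pd.read_csv("hospital_discharges.csv")  # zip, birth_date, sex, diagnosis
voters = pd.read_csv("voter_roll.csv")           # name, zip, birth_date, sex

# Link the two tables on the shared demographic fields.
linked = health.merge(voters, on=["zip", "birth_date", "sex"])

# Any combination matching exactly one voter re-identifies that record.
matches = linked.groupby(["zip", "birth_date", "sex"])["name"].transform("nunique")
reidentified = linked[matches == 1]
print(f"{len(reidentified)} records re-identified uniquely")
```

No encryption is broken and no system is hacked; the attack is nothing more than a database join.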

Sweeney hadn’t expected to find these problems—she’d originally intended to settle an argument with a medical ethicist, who had warned her that computer records were putting private information at risk. “Computers are evil,” Brandeis University’s Beverly Woodward had told her. “I wanted to correct her thinking,” Sweeney recalls. “But she literally foretold the future.”

The result was an earthquake. Within a month, Sweeney was testifying before Congress; legislators were then drafting the Health Insurance Portability and Accountability Act (HIPAA, the chief safeguard for medical records), and her work was cited in the ensuing regulations. Not long afterward, still in graduate school, she coauthored a paper introducing the concept of “k-anonymity”—a more rigorous form of de-identification, which guarantees that every released record is indistinguishable from at least k−1 others sharing the same demographic traits—and suggesting approaches for achieving it. Officials around the world began updating their policies and best practices for records storage.
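The property k-anonymity demands is easy to state computationally: every combination of quasi-identifying values in a released table must appear at least k times, so that any individual hides among at least k−1 others. A toy illustration (made-up records, not Sweeney’s published algorithm):

```python
import pandas as pd

def k_anonymity(df: pd.DataFrame, quasi_identifiers: list[str]) -> int:
    """The k for which the table is k-anonymous: the size of the smallest
    group of rows sharing the same quasi-identifier values."""
    return int(df.groupby(quasi_identifiers).size().min())

records = pd.DataFrame({
    "zip": ["02138", "02139", "02141", "02142"],
    "age": [34, 36, 41, 47],
    "sex": ["F", "F", "F", "F"],
    "diagnosis": ["flu", "asthma", "flu", "strep"],
})
qi = ["zip", "age", "sex"]
print(k_anonymity(records, qi))  # -> 1: every row is unique, so fully exposed

# One remedy is generalization: coarsen the identifiers until groups merge.
records["zip"] = records["zip"].str[:3]       # 02138 -> 021
records["age"] = (records["age"] // 10) * 10  # 34 -> 30
print(k_anonymity(records, qi))  # -> 2: the table is now 2-anonymous
```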

“She basically invented the field of data privacy,” says James Waldo, McKay professor of the practice of computer science and professor of the practice of public policy, who has worked and co-taught with Sweeney. “There’s a whole chunk of privacy-related technology that just wouldn’t have happened without Latanya’s work.”

In the years since, Sweeney, her students, and Data Privacy Lab colleagues have continued probing the weaknesses in privacy protection, which seem only to proliferate. They’ve published studies on Huntington’s disease patients in Illinois, hospital discharge records in Washington State, and the Apple Watches and Sleep Number beds that track biometric information like movement, body temperature, and heart rate. In one experiment, Sweeney re-identified anonymous volunteers in the Personal Genome Project, a DNA study led by Winthrop professor of genetics George Church. During public talks, she often displays a data map showing the numerous destinations where a typical patient’s health records might go: not only to hospitals, pharmacies, and insurers, but also to law firms, transport companies, research labs, government agencies, media outlets, and employers. The map looks like a wonky spiderweb, and at least half the data flows aren’t even covered by HIPAA regulations, she says: “Once we started opening up the can of worms, it was really a can of worms.”

The second wave of technology-society clashes appeared not quite a decade later, centered on bias. Sweeney, again, was one of the first researchers to get there. During an interview with then-Reuters reporter Adam Tanner, she Googled her own name in search of a journal article she’d written, and an ad implying she had a criminal past popped up unexpectedly on the screen. “Tell me about the time you were arrested,” Tanner said. But she had never been arrested, and the jarring encounter led the two of them to spend hours typing other names into the search bar. A formal experiment conducted afterward confirmed what Sweeney and Tanner had surmised: across 120,000 lookups, ads for arrest records appeared more often in Google searches of black-sounding first names than of white-sounding names (see “The Watchers,” January-February 2017, page 56). For some names, the ads appeared as often as 80 percent of the time, even when no arrest record existed. Sweeney’s paper on the phenomenon, published in 2013, was the earliest study of algorithmic fairness.
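The statistics behind such an audit are straightforward: count how often the ad appears for each group of names, then test whether the gap could plausibly be chance. A sketch with invented counts (not Sweeney’s data):

```python
# Illustrative contingency-table test for an ad-delivery audit.
# The counts below are hypothetical, not results from the actual study.
from scipy.stats import chi2_contingency

# Rows: name group; columns: [searches showing an arrest ad, searches without].
observed = [
    [1200, 300],  # black-sounding first names (hypothetical)
    [700,  800],  # white-sounding first names (hypothetical)
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.1f}, p = {p_value:.3g}")  # a tiny p-value means the rates differ
```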


Since then, she and her students and colleagues have conducted numerous other studies of bias driven by algorithms, looking at racially discriminatory Facebook ads, unfair pricing differences in Princeton Review SAT online tutoring, and an Airbnb fee system in which Asian hosts in Oakland and Berkeley were receiving 20 percent less money than white hosts with similar rentals. In 2014, during the year she spent as chief technology officer for the Federal Trade Commission (FTC), Sweeney examined advertising practices on the websites of student groups, including Omega Psi Phi, an African American fraternity, where she found ads for defense attorneys and for the lowest-ranked credit cards. On the websites of other fraternities, credit card ads were promoting the much more valuable American Express Blue.

“Latanya really shepherded and brought to light the importance of looking at these concepts, in a field that’s still very white- and male-dominated,” says Ji Su Yoo ’20, a former Sweeney student and teaching fellow who now studies the intersection of technology and inequality as a doctoral candidate at the UC Berkeley School of Information. The study of algorithmic bias has grown rapidly in recent years, and some of its most prominent researchers are black women who cite Sweeney’s work in their own: Princeton sociologist Ruha Benjamin; UCLA Internet studies scholar Safiya Noble; and MIT Media Lab computer scientist Joy Buolamwini, whose 2018 study of facial recognition software dramatically demonstrated how poorly it performed on dark-skinned faces, especially black women’s. “A lot of folks are standing on the shoulders of these giants,” says Yoo, “and Latanya was paving the way from the get-go. It was always an uphill battle. It still is.”

“Bridging Different Worlds”

People often ask about Sweeney’s talent for identifying these clashes: why does she perceive problems that aren’t visible to her peers? Partly the answer is obvious; as a woman and an African American, she sees with different eyes. Her life history is a factor, too. In thinking about data privacy, she’s reminded of her great-grandparents, born in 1899 and 1900 in Tennessee, who spent most of their lives under Jim Crow. “My great-grandfather had these rules about ways he had found to survive,” Sweeney told an interviewer in 2021. Most of the rules were geared toward maintaining anonymity in a hostile society. Privacy meant safety, a space in which to function. Democratic values were not an abstraction. “I think about that a lot,” she says now.

But there’s also something holistic and grounded about the way Sweeney looks at systems. She has a knack for noticing what she’s noticing, and for turning observations into experiments. “That’s a function of bridging different worlds,” says computer scientist Kathy Pham, an adjunct public-policy lecturer and faculty member of the Public Interest Tech Lab. “It would be so easy to stick to the world of scholarship, but Latanya pairs together academia and practice in a way that completely changed the world. She doesn’t just talk about voting; she builds technology and organizations to make voting better.” This is something Sweeney’s colleagues remark on often: the pragmatism and elegance of her work, perhaps especially her experimental designs. “That’s not something to be taken for granted, given her training,” says Sharad Goel, a professor of public policy who studies criminal-justice reform and democratic governance through a computational lens. In computer science, “Technical approaches are easy,” he adds. “Thinking through the actual problems, and the solutions that will be effective—that’s not easy. Her goal is not to prove theorems, although she does that, too; it’s to solve problems.”

James Waldo recalls the time a few years ago when Sweeney was teaching a lesson on privacy and surveillance and sent her class to take drone photos of his students, who were outside on a different assignment for a separate course, geotagging surveillance cameras around campus. She wanted her students to test a hypothesis that people will ignore being watched, even when the watching reaches a level they consider unacceptable. She was right: Waldo’s students were unnerved by the drones but did nothing to stop them. “The real brilliance of Latanya’s research is her ability to come up with experiments that can be used to support or falsify an intuitive claim that she has,” he says. “I almost think of it as being in the tradition of British physicists from the twentieth century, guys like Ernest Rutherford and Arthur Eddington, who would build these elegant but cheap experiments and make huge advances without having to use multimillion-dollar devices.”

In part, Sweeney’s emphasis on simplicity is connected to her emphasis on teaching. That, in turn, connects to her unshakable belief in technology’s potential to help people—and her determination to make it so. The year she spent at the FTC was eye-opening, she says. “It’s like Spider-Man’s den—it was immediately clear that the glimpse of data-privacy issues and the glimpse of algorithmic-fairness issues that I had seen was so small compared to the number that were there. I just had never realized how big these problems really were and how they were reshaping society.” The other thing she saw was how unprepared the FTC was for tech-based threats; it was a brick-and-mortar agency in an increasingly digital world. She returned to Harvard in 2015 intent on building a workforce of technologists. In the government department, Sweeney started an undergraduate tech-science program. “These are simple experiments,” she says of the studies she conducts with students. “What makes the difference is knowing what to look for, and knowing that you’re looking.”

“Misinformation and Disinformation on Steroids”

Lately, the technology-society clashes Sweeney and her students have been documenting feel increasingly existential. Now the danger is to democracy itself, and the waves are coming faster. In the last decade, misinformation and disinformation have emerged as a major threat, accelerated by artificial intelligence (AI). “AI creates misinformation and disinformation on steroids,” she says. “Suddenly, text messaging and phone calls can be done at a scale that we could not have imagined before. And with AI, each message can be personalized.” Meanwhile, online, people can say anything, with almost no transparency at all. “And you end up with these groups [of voters] thinking they know you, but in fact, nobody knows who you really are,” she says. “We’ve really not addressed that.”

Problems like these now consume much of Sweeney’s attention. She and her students have been following the 2024 election very closely. In response to the identity-theft vulnerabilities they uncovered in 2016, she and her class built a tool for Georgia’s 2020 U.S. Senate campaign runoff, which they then expanded nationwide under the name VoteFlare. It monitors the registration details of people who sign up online and notifies them with a “flare” by text, email, or phone if anything changes. In 2024, Sweeney and her students have been using the tool to follow and analyze changes to voter registrations in states across the country.
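VoteFlare’s internals aren’t spelled out here, but the monitor-and-flare pattern it embodies can be sketched simply. In the hypothetical Python below, fetch_registration and send_flare stand in for the real queries and alerts:

```python
# The monitor-and-flare pattern in miniature; fetch_registration() and
# send_flare() are hypothetical stand-ins, not VoteFlare's actual code.

def fetch_registration(voter_id: str) -> dict:
    """Stand-in for querying a state's online voter-lookup system."""
    raise NotImplementedError  # each state exposes its own interface

def send_flare(voter_id: str, changed: set[str]) -> None:
    """Stand-in for the text, email, or phone alert."""
    print(f"FLARE for {voter_id}: changed fields {sorted(changed)}")

def check_registrations(voters: dict[str, dict]) -> None:
    """Compare each enrolled voter's current record with the last snapshot
    and alert on any change; a scheduler would run this, say, daily."""
    for voter_id, last_seen in voters.items():
        current = fetch_registration(voter_id)
        changed = {k for k in current if current.get(k) != last_seen.get(k)}
        if changed:
            send_flare(voter_id, changed)
            voters[voter_id] = current  # remember the new state
```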

Another lab project, the Political Messaging Archive, is in the process of creating 63,000 hypothetical personas, two for each ZIP code—one representing a median Democratic voter, the other a median Republican. “And we sign them up for all the things that persona would do,” Sweeney says: online mailing lists, subscriptions, groups. “And then we monitor the information they’re fed on social media, on text messages, and by email,” compiling an archive that will be publicly viewable at politicalmessagingarchive.org. Another tool debuting this fall, called Same Source (samesource.org), will show users the provenance—authentic, modified, or AI-generated—of digital images and videos. With six million entries in its library, the program can identify when an image first appeared and how it might have changed. “And you can look up similar images, because what we’ve learned is that you can discern a lot from those,” Sweeney says. “Here’s an image, and all of a sudden you see it morphed, and it takes on a totally different political meaning.” With technology, those changes are trackable.
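Same Source’s matching method isn’t detailed in this account, but one standard technique for flagging modified copies of an image is perceptual hashing, sketched below with the open-source Pillow and imagehash libraries. Small edits (crops, recompression, color shifts) move a perceptual hash only a few bits, so near-duplicates stay close:

```python
# Perceptual-hash comparison of two images; a small Hamming distance
# suggests one image may be a modified copy of the other.
from PIL import Image
import imagehash

def likely_same_source(path_a: str, path_b: str, threshold: int = 8) -> bool:
    """Compare perceptual hashes; subtracting two hashes gives the
    Hamming distance in bits between them."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return (hash_a - hash_b) <= threshold
```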

“So,” she says finally, looking up, smiling, “that’s kind of what we’ve been up to. Those are some of the ways we thought we could help.” Even after the 2024 election passes, new waves of technological clashes will keep crashing ashore, and Sweeney will develop new technological tools to try to keep society afloat. “I have this weird thing,” she says: “Like, it’s not good enough to sit with a problem. I’m driven to the question of, ‘How do we solve it? How do we shore up society’s interest, the underdog’s interest?’” Almost always, she finds the answer where the question began: at her computer.

Associate editor Lydialyle Gibson wrote about the crisis of homelessness in the May-June Harvard Magazine.
