Most people can be intimately known, near instantly, in ways unimaginable just a few years ago—through the large-scale collection of data from wireless devices they keep close. “These are devices that we think of as extensions of our powers and capabilities,” said Carr Center for Human Rights director Mathias Risse at a Harvard Kennedy School (HKS) forum on September 20. “If you are anything like me,” he continued, “these devices are the first thing we touch in the morning and the last thing we touch at night. And all the wireless data emanating from these devices is processed for commercial purposes.”
Absent rules against the collection of such private data, its exploitation has made a few individuals and corporations fabulously wealthy, in a process aptly described by Harvard Business School professor emerita Shoshana Zuboff as “surveillance capitalism.” Risse, who will co-direct with Zuboff a new Carr Center program, “Surveillance Capitalism or Democracy?,” said that “the future of our living arrangements is shaped by companies that are not in business to advance visions of the common good, but that are in business—as one is in business—to make profits.” This, he continued, is among the reasons these topics have become human rights issues with a global dimension.
Risse’s remarks—made just one day after the release of a U.S. Federal Trade Commission study documenting “vast surveillance” of social media users—were the prelude to a discussion among four “immensely influential women” who have been fighting to reform for-profit information collection and exploitation: Zuboff; executive vice president of the European Commission for a Europe Fit for the Digital Age Margrethe Vestager (the outgoing European Union antitrust chief who recently won a European Court of Justice antitrust ruling against Google, and a tax judgment against Apple); Maria Ressa LL.D. ’24, the Nobel Prize-winning journalist who spoke at Harvard’s Commencement in June 2024; and Baroness Beeban Kidron, a peer in the United Kingdom’s House of Lords who has fought to protect children’s rights in the digital environment.
“Just Keep the Market Open”
Vestager said that the most important message to convey is that “it’s not too late” to stop the exploitation of personal data. She credited Zuboff with providing a vocabulary for and an understanding of how personal information is being used commercially, and said that once you see that, figuring out how to respond is something that “can be navigated.” However, she emphasized, it “cannot be done without systemic responses.” She outlined the comprehensive nature of legislation and enforcement in the EU, which, as a first step, passed privacy legislation “to make the very simple things obvious,” such as, “You own your data. You should be the one to decide” what happens to it. More recently, the EU adopted the Digital Services Act, which states that democracies can decide what should be considered illegal online just as they do in the physical world, and that online services should be safe for mental health.
Another new piece of enacted legislation, Vestager said, is the Digital Markets Act, which prevents large digital platforms (“gatekeepers”) providing services to business users and customers from using their market position to gain an undue advantage. “Very simple idea,” Vestager said, “just keep the market open, keep it contestable, so that we have choice, because that is the first step of being able to take action—that you have choice.” For business owners who depend on these platforms to serve customers, their ability to succeed should “depend on [their] idea, [their] work ethic, the people that [they] have on board, the capital [they] can raise, not on some gigantic company who holds market power.”
Last, she pointed to the AI Act (not yet fully enforced), “to make sure that when artificial intelligence is being used in situations that are crucial for us as individuals, that we are not being discriminated [against]; that AI still serves us as human beings.” All these laws flow from a simple idea that is nevertheless difficult to instantiate: that “technology should serve people.”
A “Hierarchy of Harm”
Kidron, who has used laws, regulations, and international treaties to protect the rights of children online and shield them from surveillance capitalism, said she worries about patchwork legislation. A law to deal with targeted advertising, another to deal with child sexual abuse—while allowing “the system to eat its way into every part of public and personal life”—risks creating what she called a “hierarchy of harm.” That is “not how we have to approach it,” she continued. “We need a bolder legislative approach…we need lawmakers to imagine the world that we want to live in and work out how technology is going to help us live in this world, not try to mitigate a couple of harms on the top of the cake.” And then the rules must be “routinely and ruthlessly applied to the digital world.”
Ressa, the journalist, added that in addition to ending “surveillance for profit,” laws should prevent code bias: the tendency of algorithms to discriminate against certain groups. And she reminded everyone that journalism, that antidote to tyranny, is itself at risk. Democracy, she said, needs a new system for “stopping the corruption of our public information ecosystem. We need new systems of governance, because the old power structures have been turned upside down; and we need new systems of civic engagement.”
“Total Information Awareness”
How the United States missed the early opportunity to enact federal privacy legislation was partly deliberate and partly an accident of history, as Zuboff explained. She argued that President Bill Clinton and Vice President Al Gore ’69, LL.D. ’94, sought to remove barriers to internet commerce in its heady early days, as they emphasized that the private sector must lead development of cyberspace. But there was thoughtful pushback in Washington, she related, and an attempt to outline comprehensive federal privacy legislation.
“A couple of months later,” she continued, “something very big happened: 9/11.” Instantly, the conversation was no longer about privacy but rather “total information awareness. And suddenly, these fledgling companies with their web bugs and their cookies and their monitoring techniques and their tracking,” said Zuboff, “they became little heroes, and the new thing was, get out of the way, let them go. Because if you’re an intelligence agency, it’s against the Constitution for you to monitor a civilian population and surveil them. But can you put a big straw out across the country and suck up everything that’s happening in Silicon Valley? You betcha, and you get around the whole thing that way, and that is how [the current system] became institutionalized.”
Zuboff also drew parallels between the early days of exploitation of personal information and the use of copyrighted data—including books, newspaper content, music, artists’ voices—to train contemporary AI systems. She called that stealing. “And stealing,” she added, “is a crime.”
“Who Do We Want to Be?”
As the conversation among these influential thinkers wound down, a question from the audience inadvertently underlined the stark difference in national responses to the rise of an information civilization—a world in which individuals barely exist unless they have a digital presence. The questioner asked Vestager if the legislation in Europe stifles AI: “We see no big AI companies in Europe anymore,” the HKS public policy student pointed out.
For Vestager, this pointed to a fundamental question: “Who do we want to be?” she asked. “I think Europeans would be very poor Chinese…” and “not good Americans either.” The European model, she explained, is societies built on “infrastructure available for everyone: free education, and healthcare systems that are truly inclusive.” “There is something cultural at stake,” she added, “and I think it’s really important to stay true to the model that you think works for you.” Ressa agreed, and said that the word “innovation” has been used to attack the EU and to entrench surveillance capitalism in places such as her native Philippines and the United States. “But innovation doesn’t mean you are better,” Ressa said. “It just means you are willing to do things that you know are wrong.”
“Innovation? That is a dog whistle,” added Zuboff, “that says ‘don’t pass any laws.’” What’s really meant by innovation, she said, is preservation of the status quo, enabling the architects of AI to keep driving toward their commercial objectives. “We are never going to have AI for the public good…as long as this oligopoly owns and operates the entire market structure of artificial intelligence.”
Real innovation begins when the status quo is broken down, she continued. “I promise you, there are millions of people, smart, energetic, talented, creative, like each one of you,” she said, addressing the audience of Harvard students, “and they are waiting in line for their opportunity to do the great stuff with these new technologies that solve for their communities and solve for the key diseases and solve for planet Earth and all the things we really need. And it doesn’t have to be in the structures that have been made by the oligopoly. We invent the new structures, and with it, we invent the rights and the laws and the institutions that will keep it all safe in a world of democratic governance.”