Harvard Law Professor Explains the AI Battle Between Tech and Government

Jonathan Zittrain compares today’s conflicts to tensions surrounding the early internet.


Conflicts between the federal government and the commercial AI sector are increasing. | Montage illustration by Megan Lam / Harvard Magazine; images by Adobe Stock

AI is rapidly expanding across government operations, in domains ranging from cybersecurity to intelligence analysis, and soon increasingly capable models may be able to identify and exploit vulnerabilities across both private and public software systems. As these capabilities grow, commercial technology companies and the federal government are positioning themselves for a battle over control.

In late February, AI company Anthropic became embroiled in a conflict with the U.S. Department of Defense over the use of its models in autonomous drone deployment; CEO Dario Amodei took a firm stance against allowing the Pentagon to use its models for autonomous lethal attacks or mass surveillance operations targeting Americans. In response, Defense Secretary Pete Hegseth formally designated Anthropic a “supply chain risk” last month, effectively barring its AI systems from Department of Defense contracts. Anthropic sued the federal government. 

Other tensions are beginning to surface between private AI companies and government agencies, including disputes over training models on copyrighted material without permission and over the misuse of consumer data.

Jonathan Zittrain, Bemis professor of international law at Harvard Law School and the co-founder and director of Harvard’s Berkman Klein Center for Internet & Society, is an expert on the history and evolution of the internet. In this interview, edited for length and clarity, Zittrain discusses the legal and cybersecurity ramifications of powerful AI models, the relationship between tech giants and the federal government, and the parallels he sees to the early days of the internet.

  1. What conflicts, in terms of control over AI technology, have arisen between the federal government and private tech companies?

It’s surprising there hasn't been more conflict yet! Many transformative technologies (think the interstate highway system, or aviation, or the Manhattan Project, or radio and television broadcasting) have either been conceived and deployed by the federal government, or extensively regulated by it, including through licensing schemes.

AI is closer to the internet’s playbook, where the basic science and experimentation were partially funded by the government but developed and commercialized in private hands. The government is just another customer. That might be good news to those who think the government gets things wrong a lot, particularly in areas that touch on content and viewpoint in speech—as both social media and LLMs do—but it makes it much more difficult to see where the public interest can be accounted for.

  2. How much could Anthropic’s yet-to-be-released Mythos model, which its researchers claim can exploit critical software flaws across much of the internet, change the public/private equation?

It’s hard to say when the model isn’t available for bona fide researchers to pressure test. The fact is that even before LLMs in “agentic” wrappers could probe for new vulnerabilities at scale, cybersecurity had not been “solved”—and it may not, as a practical matter, be solvable. It’s more a matter of the comparative costs of exploiting vulnerabilities and of mounting defenses, whether in software design or for a given individual or firm.

More broadly, the costs of securing software are not currently fully borne by those who write it. That’s normally the kind of market failure government rises to address. If Mythos or a successor makes us all exposed, that could be stimulus for a less hands-off approach, including at the state level.

Editor's note: “Project Glasswing,” a coalition with AWS, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks, was launched on April 7 to “help secure the world’s most critical software,” according to Anthropic President Daniela Amodei.

  3. You’ve written about the tension between openness and control in the development of the “generative internet.” What are the parallels between that time and our current AI moment?

When I wrote The Future of the Internet—And How to Stop It, I made the case that the Internet is an extremely unusual technology. If you rewound history and played it back again, it’s much more likely that instead of an internet that anyone can join and build upon without any credentials or approvals, we’d have a more sedate and ordered set of tended corporate or municipal gardens, like the old CompuServes and AOLs.

The open internet was great, but the lack of gatekeeping invited disruption of settled interests (sharing music files was detested by the music industry) and introduced security risks to everyone.

  4. Let’s talk about legal liabilities for emerging tech. Social media companies like Meta and Twitter/X spent years arguing they were “neutral conduits for third-party information,” as opposed to traditional publishers. What was the basis for that stance?

In the U.S., the platform companies’ argument was drawn from federal law dating to the 1990s, which said that most unlawful online content (for example, posts on social media or comments in websites’ comment sections) could be held only against the people who wrote it, rather than against the platforms that sorted and amplified it to millions of people.

Today, when the major platforms are multi-billion-dollar businesses, that argument is harder to sustain. Platforms ought to be able to pay the costs of finding and limiting unlawful content, and to pay damages when they fail and people are harmed.

  5. AI companies seem to be making a version of the same argument. Does it hold any better the second time around? Or could an AI model maker be liable for whatever its model puts out?

It’s hard to say that AI companies [are producing] third-party content. It originates from the company, after all, especially since the companies generally claim that they’re not unduly appropriating others’ content when they scrape books and web pages to train their models, because the models transform that content enough for their outputs to be something new.

I think we’ll see LLM model makers saying—as the social media platforms did before them—that holding them responsible for everything models produce will mean they just can’t run the models cost-effectively anymore, and that society benefits from AI models (even when the outputs can be put to bad or unlawful use).

By Olivia Farrar