AI is rapidly expanding across government operations, in domains ranging from cybersecurity to intelligence analysis, and increasingly capable models may soon be able to identify and exploit vulnerabilities across both private and public software systems. As these capabilities grow, commercial technology companies and the federal government are positioning themselves for a battle over control.
In late February, AI company Anthropic became embroiled in a conflict with the U.S. Department of Defense over the use of its models in autonomous drone deployment; CEO Dario Amodei took a firm stance against allowing the Pentagon to use its models for autonomous lethal attacks or mass surveillance operations targeting Americans. In response, Defense Secretary Pete Hegseth formally designated Anthropic a “supply chain risk” last month, effectively barring its AI systems from Department of Defense contracts. Anthropic sued the federal government.
Other tensions are beginning to surface between private AI companies and government agencies, including over the training of models on copyrighted material without permission and over the misuse of consumer data.
Jonathan Zittrain, Bemis Professor of International Law at Harvard Law School and the co-founder and director of Harvard’s Berkman Klein Center for Internet & Society, is an expert on the history and evolution of the internet. In this interview, edited for length and clarity, Zittrain discusses the legal and cybersecurity ramifications of powerful AI models, the relationship between tech giants and the federal government, and the parallels he sees to the early days of the internet.
What conflicts, in terms of control over AI technology, have arisen between the federal government and private tech companies?
It’s surprising there hasn’t been more conflict yet! Many transformative technologies (think of the interstate highway system, aviation, the Manhattan Project, or radio and television broadcasting) have either been conceived and deployed by the federal government or extensively regulated by it, including through licensing schemes.
AI is closer to the internet’s playbook: the basic science and experimentation were partially funded by the government, but the technology was developed and commercialized in private hands. The government is just another customer. That might be good news to those who think the government gets things wrong a lot, particularly in areas that touch on content and viewpoint in speech, as both social media and LLMs do, but it makes it much more difficult to see how the public interest can be accounted for.
How much could Anthropic’s yet-to-be-released Mythos model, which its researchers claim can exploit critical software flaws across much of the internet, change the public/private equation?
It’s hard to say when the model isn’t available for bona fide researchers to pressure-test. The fact is that even before LLMs in “agentic” wrappers could probe for new vulnerabilities at scale, cybersecurity had not been “solved,” and it may not, as a practical matter, be solvable. It’s more a matter of the comparative costs of exploiting vulnerabilities and of mounting defenses, whether in software design or for a given individual or firm.
More broadly, the costs of securing software are not currently fully borne by those who write it. That’s the kind of market failure government normally steps in to address. If Mythos or a successor leaves us all exposed, that could be a stimulus for a less hands-off approach, including at the state level.
Editor's note: “Project Glasswing,” a coalition that includes AWS, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks, was launched on April 7 to “help secure the world’s most critical software,” according to Anthropic President Daniela Amodei.
You’ve written about the tension between openness and control in the development of the “generative internet.” What are the parallels between that time and our current AI moment?
When I wrote The Future of the Internet—And How to Stop It, I made the case that the internet is an extremely unusual technology. If you rewound history and played it back again, it’s much more likely that instead of an internet that anyone can join and build upon without any credentials or approvals, we’d have a more sedate and ordered set of tended corporate or municipal gardens, like the old CompuServes and AOLs.
The open internet was great, but the lack of gatekeeping invited disruption of settled interests (the music industry detested the sharing of music files, for instance) and introduced security risks for everyone.
Let’s talk about legal liabilities for emerging tech. Social media companies like Meta and Twitter/X spent years arguing they were “neutral conduits for third-party information,” as opposed to traditional publishers. What was the basis for that stance?
In the U.S., the platform companies’ argument was drawn from federal law dating to the 1990s (Section 230 of the Communications Decency Act), which said that most unlawful online content, such as posts on social media or comments in websites’ comment sections, could be held only against the people who wrote it, rather than against the platforms that sorted and amplified it to millions of people.
Today, when the major platforms are multi-billion-dollar businesses, that argument is harder to sustain. Platforms ought to be able to pay the costs of finding and limiting unlawful content, and to pay damages when they fail and people are harmed.
AI companies seem to be making a version of the same argument. Does it hold any better the second time around? Or could an AI model maker be liable for whatever its model puts out?
It’s hard to say that AI companies [are producing] third-party content. It originates from the company, after all, especially since the companies generally claim that they’re not unduly appropriating others’ content when they scrape books and web pages to train their models, on the theory that the models transform that content enough for their outputs to be something new.
I think we’ll see LLM makers saying, as the social media platforms did before them, that holding them responsible for everything their models produce will mean they simply can’t run the models cost-effectively anymore, and that society benefits from AI models even when their outputs can be put to bad or unlawful use.