At a December conference at the Harvard Kennedy School (HKS) Ash Center for Democratic Governance and Innovation, titled “After Neoliberalism: From Left to Right,” scholars and policymakers examined how social media, AI, and digital platforms are reshaping markets, labor, and even the way the United States is governed. Across panels, one question surfaced repeatedly: how should societies develop policy around technologies that may, unlike most other goods and services, have the capacity to undermine democracy itself?
Emerging technology is the “most consequential driver of change in our economics and politics,” HKS Dean Jeremy Weinstein said in remarks preceding the afternoon panel series. Weinstein, the Price professor of public policy, said society must confront tech policy with constant, active involvement, echoing themes from his co-authored book System Error: Where Big Tech Went Wrong and How We Can Reboot (2021).
Neoliberalism, a school of thought that took shape in the 1930s and is most closely associated with economists such as Friedrich Hayek and Milton Friedman, emphasizes free markets, deregulation, privatization, and limited government. Its central premise is that competitive markets allocate resources efficiently, spur innovation, and serve the public better than direct government intervention.
Digital technologies increasingly test those assumptions, many panelists said. Even before the rapid acceleration of AI in the early 2020s, social media exposed the limits of market self-regulation. Policymakers confronted difficult tradeoffs among innovation, public safety, and democracy, while many tech CEOs treated innovation as a morally neutral goal.
But several panelists at “After Neoliberalism” argued that technology is not a neutral tool but a force that actively shapes political and social life. Social media platforms such as Facebook, Instagram, and WhatsApp remain far less regulated than even basic consumer goods. In the panel titled “Applications of a New Paradigm: Technology Development,” MIT economist Daron Acemoglu argued that those who control data increasingly shape social outcomes, including through surveillance and political manipulation.
Some of the conversations centered on whether the wealth and market power held by a handful of technology firms represent a broader democratic failure. During the panel “Where Do We Go From Here: From Left to Right,” Matt Stoller, the director of research at the American Economic Liberties Project, argued that the U.S. economy is effectively propped up by the stock market and its incentives. He pointed to mounting evidence that Meta profits from ad fraud while tolerating serious harms to users (including trafficking) with little consequence. In fact, a December 2025 Reuters special report revealed that Chinese advertisers had defrauded Facebook, Instagram, and WhatsApp users worldwide and that, according to internal documents, Meta was slow to implement safeguards that could have reduced its revenue.
The government’s role in regulating technology, Stoller argued, has focused on maximizing capital efficiency rather than protecting citizens—jeopardizing “the brains of our children, and frankly, me.”
Allison Schrager, a senior fellow at the Manhattan Institute, pushed back, saying the success of technology companies expands private wealth and personal savings for millions of Americans. “Sixty percent of the population does own stock,” she said. “It’s not like [Meta CEO] Mark Zuckerberg is the only equity holder.”
In the past year, the Trump administration has aimed to further limit federal oversight of technology platforms, citing the need for these platforms to compete organically. In January 2025, U.S. President Donald Trump signed Executive Order 14149, “Restoring Freedom of Speech and Ending Federal Censorship,” further reducing federal involvement in social media governance.
The same fault lines now extend even more sharply into debates over artificial intelligence, intensifying questions about whether governments should, or even can, regulate rapidly evolving technologies. On December 14, Trump signed an executive order, “Ensuring a National Policy Framework for Artificial Intelligence,” which preempts state-level AI regulation.
“We are in a race with adversaries for supremacy,” the order states, arguing that U.S. AI companies must be free to innovate “without cumbersome regulation.” Excessive state oversight, it claims, threatens national competitiveness.
Many panelists at “After Neoliberalism” warned of the risks of a neoliberal approach to technology policy. Weinstein cited a report from the Federal Reserve Bank of Dallas this past summer that, he said, “demonstrated just how little we understand what’s ahead of us.” Attempting to model the impacts of AI, the report predicted outcomes that “range from unprecedented productivity growth to total extinction,” Weinstein said.
Acemoglu argued that technology governance is inevitably messy but should guide innovation without smothering it.
Zoë Kettler Hitzig, a research scientist at OpenAI, discussed what she called the “alignment problem”: the challenge of ensuring that powerful large language models (LLMs) act in ways that benefit humanity. She said AI research increasingly attempts to accommodate plural and often conflicting values, but that society lacks “property-like institutions” for the digital world.
For example, she said, offline, ownership is intuitive: “If I left my coat in the room after the panel, it would be considered weird for one of you to walk away with it.” By contrast, online, data, images, and identities circulate “without clear norms,” making exploitation easy.
Emma Waters, a policy analyst at the Heritage Foundation’s Center for Technology and the Human Person, challenged what she called the “myth of moral neutrality” in American public life. For decades, Waters argued, ethical and religious commitments have been bracketed out of policy debates in favor of a supposedly “neutral public square.” Technology policy, she said, follows the same pattern: build first, consult ethicists later. She urged policymakers to evaluate technologies not only by what they enable, but by the moral and social consequences they impose.