Proactive AI Policy

Businesses should start self-regulating before government intervention, argue Harvard professors.


Artificial intelligence is developing faster than policymakers can keep up, and the gap in understanding between government and industry could pose challenges for effective regulation. But that doesn’t mean corporations should develop and deploy AI as they please until the government catches up, cautioned speakers discussing AI regulation at a May 7 “Leading with AI” conference at Harvard Business School (HBS). Instead, panelists urged businesses to take proactive steps in self-regulating and preparing for government regulation.

“You need disclosure through [a company] as things go wrong, and internal transparency that will prepare you for when the government starts to mandate transparency,” said Noah Feldman, Frankfurter professor of law (who advised on Facebook’s self-regulatory “oversight board”). “It’s coming. There’s never been a regulatory regime that hasn’t relied on transparency as one of its crucial tools.”

At the conference—presented by HBS Alumni Relations and the Harvard Digital Data Design Institute, a lab dedicated to the study of technology and business—many speakers argued that businesses will have to fundamentally shift how they approach labor, upskilling, marketing, and more in the age of AI. “This shift is not incremental,” said Fitzhugh professor of business administration Tsedal Neeley. “It’s not tweaking existing structures and processes. This is introducing radical changes, new systems, new processes, new structures.” HBS has committed to studying and preparing students for these changes, said Mitchell Weiss, MBA ’04, Menschel professor of management practice. Last fall, HBS became “maybe the only business school to require that every student have GPT Plus,” which provides access to ChatGPT’s most advanced model—“and we paid for it,” he said.

One of those fundamental shifts will take place in the relationship between business and technology, speakers at the conference said. As corporations make decisions about how to self-regulate and maintain records of AI usage and development, “we can no longer sit in business silos,” said Anita Lynch, MBA ’08, a board member for Nasdaq U.S. Exchanges. “You can’t be a business leader and not understand the technology anymore.”

Speakers also anticipated the form that government regulation might take. In a Bloomberg article last year, Feldman laid out possible approaches. The most extreme would be nationalization of the AI industry—a path few in the United States endorse. More moderate options, he continued, include criminal codes, statutory civil regulation, and administrative rules. In these more moderate paths forward, a separate regulatory agency overseeing AI is unlikely, he said at the conference: existing “regulatory agencies won’t just hand over their authority.” Instead, he foresees existing agencies taking over relevant areas of oversight.

Feldman also outlined an additional option: self-regulation by the industry. This option has already been pursued to some extent: in July 2023, companies including Meta and Google voluntarily committed to AI safety standards—though there are “no hard gating factors to stop them from doing things differently,” Lynch said. Moving forward, she added, governmental regulation will set a “minimum standard”—but self-regulation and higher internal standards that companies can begin to establish now will remain important.

Lynch pointed to another technology that could provide guidance as regulators face the unknown. The Stanford computer scientist Andrew Ng has compared AI to electricity, and regulation of electricity is strict yet flexible, as AI regulation should be, Lynch said. “It’s hard to think about the different ways [electricity] could be used or abused, but there are basic safety protocols,” she said. With AI, too, “There are industry-specific uses that are going to be tough to anticipate, and therefore to regulate. But that doesn’t mean we shouldn’t attempt to, and that doesn’t mean there’s no hope in trying to.”

Those attempts may require some experimentation—especially now, in the present era of little to no regulation. Bemis professor of international law and professor of computer science Jonathan Zittrain, faculty director of the Berkman Klein Center for Internet & Society, floated policies that would encourage data collection and disclosure. For instance, Zittrain asked, how can the government incentivize disclosure of “something that’s not clearly illegal—because there hasn’t been time to identify and pass every conceivable protective law—but is a little dodgy?” If the corporation discloses that activity, he continued, “maybe that should occasion a cap for damages.” A policy like this shifts the incentives for the company: “I’m going to come forward, give society a chance to grapple with this, and I’ll be protected.”

Amid this experimentation, Lynch said, it’s important to educate policymakers so they have the foundations to make such decisions. “Despite initiatives trying to get more tech and product people” involved in government, “I don’t know that they’ll ever catch up. So we’ll have to come up with a mechanism that works in spite of the lack of supply,” she continued. “We’re going to have to educate people more.”

It’s necessary to decide not only on the exact form of regulation but also on its purpose, added Zittrain, who has taught a course on ethics and AI. “You have to have a destination, regulatorily speaking, in mind,” he said. “I don’t know if we do.” Currently, companies such as OpenAI “align” their models with human values during the training process to ensure that they don’t produce offensive or harmful outputs. If users ask ChatGPT how to commit a crime, for example, the bot is trained to respond “in a friendly but ultimately scolding voice” that it can’t provide that information, Zittrain said. But, he asked, are we sure such guardrails are necessary—and don’t conflict with free speech principles?

“I find myself worrying more about the fine-tuning of these models for safety than I do about the recondite uncertainty about what they’re going to say to begin with,” he said. “The process of alignment happens totally internally, and we don’t know—except for what [companies] choose to share—what that alignment looks like.”

Working through these questions should be a societal process, speakers said—not one confined to tech companies and Congress. And this process offers the chance to fundamentally shift how we think about regulating the Internet. Now, Zittrain said, our system is like a cocktail olive: a big green oval (the permitted) with a small portion of red (the forbidden); in other words, “everything not forbidden is permitted.” What if we swap that logic, he asked, so that everything not permitted is forbidden?

As AI continues to progress, it’s important for regulators to keep big questions like this in mind alongside concerns about regulating specific technological advances. “From a regulator point of view,” Zittrain said, “figuring out how to…[keep] the wheels on the bus as the bus is roaring along, while also thinking about the map—that’s the important thing.”

By Nina Pasquini
