Controlling AI Influence over Consumers

Illustration: an AI-driven algorithm presents six consumers with different prices for the same product, a jacket.

Consumer-facing artificial intelligence—algorithms that adjust product pricing based on what they know about a buyer—could, in some circumstances, inappropriately take advantage of the public, argue a pair of Harvard Law School scholars who are assessing the impacts. Friedman professor of law and economics Oren Bar-Gill and Walmsley University Professor Cass Sunstein (see “The Legal Olympian,” January-February 2015, page 43) argue that such a system’s ability to do harm or good depends on which aspects of consumer behavior it is directed to target.

Companies deploy retail artificial intelligence (AI), says Bar-Gill, to figure out how much a customer is willing to pay for a good or service; the AI then adjusts the price up or down, aiming to land just below that ceiling. A customer's "willingness to pay," he says, has three components. First is income level: richer buyers are able to pay more. Second is a customer's interest in the product: someone who values an item more is willing to pay more to acquire it. Third is a consumer's misperceptions and biases: less savvy consumers can be duped by complicated terms or delayed payments. When an algorithm collects individualized information (someone's location, wardrobe preferences, or fears), it can categorize a consumer and tailor the price accordingly.
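To make the mechanism concrete, here is a minimal Python sketch of the pricing logic Bar-Gill describes: estimate a shopper's willingness to pay from the three components, then quote a price just below it. The profile fields, weights, and 0.95 markdown are illustrative assumptions, not any company's actual system.

```python
# Hypothetical sketch of component-based personalized pricing.
from dataclasses import dataclass

@dataclass
class ConsumerProfile:
    income_score: float    # 0-1: inferred ability to pay
    interest_score: float  # 0-1: inferred valuation of the product
    bias_score: float      # 0-1: inferred susceptibility to misperception

def estimate_wtp(profile: ConsumerProfile, base_price: float) -> float:
    """Combine the three components into an estimated price ceiling."""
    multiplier = (1.0
                  + 0.4 * profile.income_score
                  + 0.4 * profile.interest_score
                  + 0.2 * profile.bias_score)  # the ethically fraught component
    return base_price * multiplier

def personalized_price(profile: ConsumerProfile, base_price: float) -> float:
    """Quote just below the estimated willingness to pay."""
    return round(estimate_wtp(profile, base_price) * 0.95, 2)

# Two shoppers identical except for inferred bias: the more biased one
# is quoted a higher price for the same jacket.
savvy = ConsumerProfile(income_score=0.5, interest_score=0.5, bias_score=0.1)
biased = ConsumerProfile(income_score=0.5, interest_score=0.5, bias_score=0.9)
print(personalized_price(savvy, 100.0))   # 134.9
print(personalized_price(biased, 100.0))  # 150.1
```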

An AI that draws on those first two components may benefit consumers. Pricing by interest, Bar-Gill says, can let consumers who derive less benefit from a product, and who would not buy it at full price, purchase it at a discount. Adjusting price for income (lowering it for customers with lesser means and raising it for richer ones) is acceptable, he says, because "it allows poor consumers to purchase the product." Sellers still make a profit, he points out. Bar-Gill considers these two kinds of price adjustment reasonable because they expand the customer base and allow more people to buy the item at a price that aligns with how they value it, and with their ability to pay for it.
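A toy calculation, with made-up numbers, shows why Bar-Gill views these first two adjustments as market-expanding: a shopper priced out at a uniform price can still buy at a personalized one, and the seller profits on every sale.

```python
# Hypothetical numbers: unit cost, a uniform price, and three shoppers'
# estimated willingness to pay (WTP).
UNIT_COST = 40.0
UNIFORM_PRICE = 100.0
wtp = {"rich/keen": 130.0, "average": 100.0, "poor/lukewarm": 70.0}

def personalized(w: float) -> float:
    return w * 0.95  # quote just below the estimated ceiling

# Uniform pricing: only shoppers whose WTP covers the sticker price buy.
uniform_profit = sum(UNIFORM_PRICE - UNIT_COST
                     for w in wtp.values() if w >= UNIFORM_PRICE)

# Personalized pricing: everyone whose quote still exceeds cost buys.
personal_profit = sum(personalized(w) - UNIT_COST
                      for w in wtp.values() if personalized(w) > UNIT_COST)

print(uniform_profit)   # 120.0, from two buyers
print(personal_profit)  # 165.0, from all three buyers
```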

But manipulating price based on what a consumer knows, or fails to understand, could be exploitative and abusive, and may be illegal in some circumstances, as Bar-Gill, Sunstein, and Inbal Talgam-Cohen of the Technion–Israel Institute of Technology described in a recent paper on algorithmic harms. "Fine-tuning of the price is not necessarily bad…" Bar-Gill says. "But if a major component of the willingness to pay is our misperceptions or biases—when I am charged a higher price not because I'm richer or because I liked this product more than you do, but just because I suffered from a greater cognitive bias than you do—then the harm to consumers is significantly larger."

Using certain types of information to manipulate prices is already illegal. Raising prices upon learning that someone is part of a distinguishable “protected class”—categorized by race, skin color, religion, sex, or national origin—violates existing civil rights laws. And consumer protection laws prohibit “unfair or deceptive acts” and “abusive” practices, which Bar-Gill says may extend to companies charging higher prices to less sophisticated customers. But it is not yet clear if or how the government will protect consumers from exploitative algorithms.

He and Sunstein, who led the White House Office of Information and Regulatory Affairs during the Obama administration, now lead a Law School initiative focused on the intersection of artificial intelligence and the law. The initiative brings together HLS's AI-related work, encouraging research, sponsoring symposia, and publishing findings. "We hope to see 10,000 flowers bloom," says Sunstein. A 2023 paper titled "Algorithmic Harm in Consumer Markets" and its forthcoming expansion into a book are among the initiative's first projects.

Bar-Gill began thinking about how companies exploit flawed human reasoning long before the recent AI revolution. His 2012 book, Seduction by Contract: Law, Economics, and Psychology in Consumer Markets, analyzes how companies such as cell phone providers and mortgage lenders "deliberately design their products, contracts, and pricing structures" to take advantage of cognitive limitations, he says. By designing complex pricing models with many deferred costs, companies may offer consumers a product they do not understand. Artificial intelligence, Bar-Gill fears, could supercharge the ways humans deceive each other. "The algorithms are now not just taking the bias and irrationalities they are given," he says. "They currently are…and will quickly play a much larger role in creating and exacerbating biases." Knowing that someone is present-biased, for instance, an algorithm may inundate that consumer with advertisements for floating-rate mortgages, which are often cheap in the short term but costly over the life of the loan.
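The arithmetic behind that mortgage example is easy to sketch. Assuming entirely hypothetical loan terms, and simplifying by re-amortizing the full principal at the reset rate, a present-biased borrower sees only the low early payments:

```python
# Hypothetical loan terms illustrating the teaser-rate temptation:
# low payments now, higher cost after the rate resets.
PRINCIPAL = 300_000.0
YEARS = 30

def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Standard fixed-payment amortization formula."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

fixed = monthly_payment(PRINCIPAL, 0.065, YEARS)   # fixed-rate offer
teaser = monthly_payment(PRINCIPAL, 0.040, YEARS)  # initial floating rate
reset = monthly_payment(PRINCIPAL, 0.085, YEARS)   # after reset (simplified)

print(f"fixed:    ${fixed:,.0f}/month for the life of the loan")
print(f"floating: ${teaser:,.0f}/month at first, ~${reset:,.0f}/month after reset")
```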

Bar-Gill teaches introductory contract law each fall. The cases taught in those classes involve "very sophisticated parties" with "highly paid lawyers and bankers," he says, so the litigants tend to operate perfectly rationally. But studying consumer markets showed him that ordinary people do not operate like corporations. He began researching "how the law can intervene to help avoid these types of exploitation."

Ideally, says Bar-Gill, companies could show regulators the component parts of their algorithms. Regulators could then identify precisely the practices that lead to consumer exploitation. But achieving such AI transparency is difficult, he says: some companies utilize “black box algorithms,” where even the programmers are not sure how the algorithm manipulates inputs to generate outputs. “If it is true that sellers are using these machine-learning black box algorithms and they themselves don’t know exactly how the algorithm is pricing,” he says, “then in order for the law to intervene, we have to… start with the initial step of trying to create this transparency.” But Bar-Gill says it’s unclear whether these algorithms can actually be made transparent.

Absent the ability to analyze the underlying AI algorithms, Sunstein and Bar-Gill have proposed some workaround consumer protections that could be implemented through improved enforcement of existing law or through new regulation. Programmers could be required to hardwire restrictions into their code, specifying which consumer characteristics may influence prices (wealth, interest) and which may not (sophistication, race, gender). Alternatively, regulators could borrow from anti-discrimination law and analyze an algorithm's "disparate impact," as sketched below. "If we can show just by looking at the outcomes," says Bar-Gill, "that consumers who are more sophisticated pay lower prices than the consumers who are less sophisticated, then that is in itself sufficient to trigger a legal scrutiny, even if we don't know exactly what [data] the algorithm is using."
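The disparate-impact approach lends itself to a simple outcome audit. Here is a minimal sketch, assuming a regulator can observe quoted prices and some (admittedly hard-to-measure) sophistication label, but nothing about the algorithm's internals; the data and threshold are invented for illustration.

```python
# Hypothetical outcome audit: compare average quoted prices across groups
# without opening the black box.
from statistics import mean

# (sophistication_group, quoted_price) pairs observed from the algorithm
quotes = [
    ("sophisticated", 95.0), ("sophisticated", 98.0), ("sophisticated", 96.5),
    ("unsophisticated", 112.0), ("unsophisticated", 109.0), ("unsophisticated", 115.0),
]

def group_mean(group: str) -> float:
    return mean(price for g, price in quotes if g == group)

# A gap above some tolerance could trigger legal scrutiny, even with no
# access to the pricing model itself.
gap = group_mean("unsophisticated") - group_mean("sophisticated")
THRESHOLD = 5.0  # hypothetical regulatory tolerance
print(f"average gap: ${gap:.2f}",
      "-> warrants scrutiny" if gap > THRESHOLD else "-> within tolerance")
```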
