The Disinformation Dilemma

Illustration by Doug Panton


In the discussion of how Russian operatives manipulated public opinion during the 2016 presidential election, it’s easy to overlook that their malicious goals were advanced by tools originally designed to further the economic interests of leading Internet companies like Facebook and Google.

Dipayan Ghosh, a fellow at the Kennedy School’s Shorenstein Center on Media, Politics and Public Policy who previously worked at Facebook on privacy and public policy and advised the Clinton campaign, spent the months immediately following the election researching how Russian disinformation campaigns had used tools such as search engine optimization, behavioral data collection, and social media management software (SMMS) to spread and promote “fake news” widely online. To raise awareness of these abuses, he teamed up with Ben Scott, a senior adviser to the nonprofit Open Technology Institute at New America and a fellow adviser to the Clinton campaign; in January they published a paper with New America, “Digital Deceit: The Technologies Behind Precision Propaganda on the Internet.”

Disinformation agents were fundamentally successful, Ghosh says, because they were able to tap into the lifeblood of the modern digital advertising landscape: behavioral data. Those data exist because websites compile every click, share, and search query into a user profile. One common mechanism is the “cookie,” a small piece of data stored in the browser that lets a site tie a user’s activity to a single profile and infer that user’s preferences and interests. Advertisers use these inferred preferences to show users advertisements in line with those interests, like hiking boots instead of high heels. It seems a harmless, mutually beneficial marketplace, in which users are exposed to the kinds of content they want to see and advertisers are able to generate revenue.
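
A minimal sketch of how this kind of cookie-based profiling can work, assuming a hypothetical Flask service (the endpoint, cookie name, and in-memory profile store are illustrative, not any company’s actual system): each new visitor gets an ID stored in a cookie, and every page view is logged against that ID to build an interest profile.

```python
# Illustrative sketch only: a hypothetical Flask app that assigns each
# visitor a tracking cookie and logs page views into an interest profile.
import uuid
from collections import defaultdict

from flask import Flask, make_response, request

app = Flask(__name__)
profiles = defaultdict(list)  # user_id -> list of topics viewed (in memory)

@app.route("/page/<topic>")
def page(topic):
    # Reuse the visitor's tracking cookie, or mint a fresh ID on first visit.
    user_id = request.cookies.get("uid") or uuid.uuid4().hex
    profiles[user_id].append(topic)  # record the behavioral signal
    resp = make_response(f"Content about {topic}")
    resp.set_cookie("uid", user_id)  # persist the ID in the browser
    return resp

def inferred_interests(user_id, top_n=3):
    """Rank the user's most-viewed topics: the 'preferences' ads are matched to."""
    events = profiles[user_id]
    return sorted(set(events), key=events.count, reverse=True)[:top_n]

if __name__ == "__main__":
    app.run()
```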

But Ghosh says that this practice of constant mass data collection also provides ample opportunities for disinformation agents to manipulate users’ experiences in the political landscape. Location data collected through apps and sites, for example, can be used by a disinformation campaign to determine where a voter lives, in order to tailor ads to races and hot-button issues for that specific region.
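
A hedged illustration of the location-based tailoring described above; the region boundaries, issue lists, and message template below are invented for the example, not drawn from any real campaign.

```python
# Hypothetical example of steering political content by region: map a
# user's location to the hot-button issues most likely to resonate there.
REGION_ISSUES = {
    "midwest":   ["manufacturing jobs", "trade policy"],
    "southwest": ["immigration", "water rights"],
}

def region_from_location(lat: float, lon: float) -> str:
    """Toy geolocation: a real system would use IP- or GPS-to-region lookups."""
    return "midwest" if lon > -100 else "southwest"

def tailor_message(lat: float, lon: float, template: str) -> str:
    region = region_from_location(lat, lon)
    issue = REGION_ISSUES[region][0]  # pick the region's top hot-button issue
    return template.format(issue=issue)

# e.g. a viewer located in Chicago sees the Midwest-flavored variant
print(tailor_message(41.9, -87.6, "Candidate X is wrong on {issue}!"))
```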

After using Internet data to determine what kinds of propagandized messages might speak to specific audiences, disinformation campaigns can also synchronize their efforts across platforms such as Twitter, Facebook, and Instagram through the use of SMMS. Such software helps brands select and schedule the content they wish to promote to particular audiences. Ghosh emphasizes that these tools are not inherently malicious: they help advertisers connect with consumers with less effort and greater success by reinforcing messages across media. But a political disinformation agent could just as easily use the software to push a fake story on multiple platforms while tailoring each iteration of the story with data on what is most likely to persuade specific audience segments. In cases like these, SMMS makes disseminating destabilizing rumors and sensationalized stories faster and easier.
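
The sketch below suggests what this kind of SMMS-style fan-out can look like under the hood; the `schedule_story` function, platform names, and audience segments are hypothetical stand-ins for what commercial products expose, not a real product’s API.

```python
# Simplified, hypothetical model of SMMS scheduling: one story, many
# platform- and audience-specific variants, all queued to post in sync.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ScheduledPost:
    platform: str   # e.g. "twitter", "facebook", "instagram"
    segment: str    # the audience slice this variant targets
    text: str
    post_at: datetime

def schedule_story(story: str, variants: dict[tuple[str, str], str],
                   post_at: datetime) -> list[ScheduledPost]:
    """Fan one story out as tailored variants across platforms and segments."""
    return [ScheduledPost(platform, segment, f"{spin} {story}", post_at)
            for (platform, segment), spin in variants.items()]

queue = schedule_story(
    "https://example.com/story",
    {("twitter", "rural"):   "They don't want you to see this:",
     ("facebook", "urban"):  "Shocking report confirms what we suspected:"},
    datetime(2018, 11, 6, 8, 0),
)
```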

One of the easiest ways to detect manipulation of search results from providers such as Google is to watch for instances where content from less credible sources ranks above that from well-established outlets. Foreign agents in 2016 used so-called black-hat (as in old Westerns) search engine optimization techniques to probe, reverse-engineer, and ultimately trick Google’s algorithm into promoting their propagandized content to the top of search results. Ghosh says there is a problem of scale in fighting such abuse. Even if Google wanted to “throw its entire security team at this problem” it couldn’t, because “the number of black-hat SEO attacks per security person at Google is just not a ratio in Google’s favor.” For this reason, he encourages companies to adopt “bug-bounty” programs that financially reward people outside the organization who can figure out ways to push disinformation through the existing system, thus pinpointing loopholes and security issues that companies can fix. “It’s throwing money at the problem,” Ghosh says, “which is really something we have to get more comfortable with doing.”
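
A rough sketch of the ranking heuristic described at the start of this section: flagging result pages where a low-credibility domain outranks well-established outlets. The domain lists here are placeholders for illustration, not a vetted credibility rating system.

```python
# Flag a ranked result list if a low-credibility domain appears before
# any well-established outlet. Domain lists are illustrative placeholders.
CREDIBLE = {"nytimes.com", "reuters.com", "apnews.com"}
LOW_CREDIBILITY = {"totally-real-news.example"}

def domain(url: str) -> str:
    return url.split("//")[-1].split("/")[0].removeprefix("www.")

def flag_suspicious_ranking(results: list[str]) -> bool:
    """True if a low-credibility source outranks the first credible one."""
    for url in results:  # results are ordered by rank, best first
        d = domain(url)
        if d in LOW_CREDIBILITY:
            return True   # dubious source seen before any credible outlet
        if d in CREDIBLE:
            return False
    return False

print(flag_suspicious_ranking([
    "https://totally-real-news.example/shocking-story",
    "https://www.reuters.com/article/abc",
]))  # True: the dubious domain outranks the credible one
```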

He and Scott offer a number of technical solutions to help ensure that SMMS companies, Internet platforms, and advertisers head into the 2018 and 2020 elections with stronger safeguards against misuse of their digital toolkits. But in the effort to promote policy change and push Internet companies to implement better security processes, Ghosh believes primarily in the power of public opinion. “The best way we can raise awareness” about how “the threat of disinformation can linger on these platforms, and surface at the most critical times in our national history, is by talking about and writing about it,” he says. “I’m talking about the pitchforks coming out.”
