AI and adversarial attacks

Vulnerabilities to manipulation

by Jonathan Shaw

The privacy and security issues surrounding big data, the lifeblood of artificial intelligence, are well known: large streams and pools of data make fat targets for hackers. AI systems have an additional vulnerability: inputs can be manipulated in small ways that completely change decisions. A credit score, for example, might rise significantly if one of the data points used to calculate it were altered only slightly. That’s because such systems classify each input in a binary manner, placing it on one side or the other of an imaginary line—the decision boundary drawn by a classifier. Perturb the input—say, the ratio of debt to total credit—ever so slightly, but just enough to cross that line, and the score the AI system calculates changes.
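A toy example makes this concrete. The sketch below uses an invented linear classifier with made-up weights and inputs (none drawn from any real scoring model): nudging one input by 0.04 carries the application across the decision boundary and flips the outcome.

```python
import numpy as np

# A toy linear classifier: approve if w . x + b >= 0.
# Weights and inputs are invented for illustration only.
w = np.array([-4.0, 2.5])   # weights on [debt-to-credit ratio, payment history]
b = 1.0

def classify(x):
    return "approve" if w @ x + b >= 0 else "deny"

x = np.array([0.50, 0.35])       # score = -4(0.50) + 2.5(0.35) + 1 = -0.125
print(classify(x))               # deny

x_nudged = x + np.array([-0.04, 0.0])   # lower the debt ratio to 0.46
print(classify(x_nudged))        # approve (score = +0.035)
```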

The stakes for making such systems resistant to manipulation are obviously high in many domains, but perhaps especially so in the field of medical imaging. Deep-learning algorithms have already been shown to outperform human doctors in correctly identifying skin cancers. But a recent study from Harvard Medical School coauthored by Nelson professor of biomedical informatics Isaac Kohane (see “Toward Precision Medicine,” May-June 2015, page 17), with Andrew Beam and Samuel Finlayson, showed that the addition of a small amount of carefully engineered noise “converts an image that the model correctly classifies as benign into an image that the network is 100 percent confident is malignant.” This kind of manipulation, invisible to the human eye, could lead to nearly undetectable health-insurance fraud in the $3.3-trillion healthcare industry as a duped AI system orders unnecessary treatments. Designing an AI system ethically is not enough—it must also resist unethical human interventions.
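Such “carefully engineered noise” is typically built from the model’s own gradients. Below is a minimal sketch of one standard recipe, the fast gradient sign method; it illustrates the general technique rather than the study’s exact procedure, and `model`, `image`, and `label` are assumed placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, eps=0.01):
    """Shift each pixel by +/-eps in the direction that most
    increases the model's loss: carefully engineered noise."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + eps * image.grad.sign()  # imperceptible at small eps
    return adversarial.detach().clamp(0.0, 1.0)    # keep a valid pixel range
```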

Yaron Singer, an associate professor of computer science, studies AI systems’ vulnerabilities to adversarial attacks in order to devise ways to make those systems more robust. One defense is to use multiple classifiers: there is more than one way to draw a line that successfully classifies the pixels in a photograph of a school bus as yellow or not yellow. Although the system may ultimately use only one of those classifiers to determine whether the image does contain a school bus, the attacker can’t know which classifier the system is using at any particular moment—and that increases the odds that any attempt at deception will fail.
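In code, the idea can be as simple as holding several independently trained classifiers and choosing one at random per query, so an attacker has no fixed decision boundary to aim at. The sketch below is a hypothetical illustration of that scheme, not Singer’s actual system.

```python
import random

class RandomizedEnsemble:
    """Serve each prediction from one of several classifiers,
    chosen at random per query. Each classifier draws its decision
    boundary differently, so a perturbation crafted against one of
    them often fails against the one actually consulted."""

    def __init__(self, classifiers):
        self.classifiers = list(classifiers)       # each: input -> label

    def predict(self, x):
        chosen = random.choice(self.classifiers)   # unknown to the attacker
        return chosen(x)
```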

Singer points out that adding noise (random variations in brightness or color information) to an image is not in itself unethical—it is the uses, not the technology itself, that carry moral force. Noise can, for example, be added to personal photographs posted online as a privacy measure to defeat machine-driven facial recognition—a self-protective step likely to become more commonplace as consumer-level noise-generating tools become widely available. On the other hand, Singer notes, had such identity-obfuscating software already been widely available, Italian police would probably not have apprehended a most-wanted fugitive who’d been on the run since 1994. He was caught in 2017, perhaps when a facial-recognition program spotted a photo of him at the beach, in sunglasses, on Facebook.
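A minimal sketch of the basic operation, assuming an RGB image held as a NumPy array (the `sigma` strength here is illustrative; real obfuscation tools engineer the perturbation rather than drawing it at random):

```python
import numpy as np

def add_noise(pixels, sigma=8.0, seed=None):
    """Add Gaussian noise to an (H, W, 3) uint8 image array.
    A small sigma leaves the photo looking unchanged to a person
    while shifting the pixel values a recognition model reads."""
    rng = np.random.default_rng(seed)
    noisy = pixels.astype(np.float64) + rng.normal(0.0, sigma, size=pixels.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)
```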
