
Research

AI and Adversarial Attacks

Vulnerabilities to manipulation

January-February 2019

Yaron Singer

Photograph courtesy of Yaron Singer



The privacy and security issues surrounding big data, the lifeblood of artificial intelligence, are well known: large streams and pools of data make fat targets for hackers. AI systems have an additional vulnerability: inputs can be manipulated in small ways that completely change decisions. A credit score, for example, might rise significantly if one of the data points used to calculate it were altered only slightly. That’s because computer systems classify each bit of input data in a binary manner, placing it on one side or the other of an imaginary line, the classifier’s decision boundary. Perturb the input ever so slightly, say by altering the ratio of debt to total credit, just enough to cross that line, and the score calculated by the AI system changes.
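To make the mechanism concrete, here is a minimal sketch, not drawn from the article, of a toy linear classifier whose decision flips when a single feature (an invented debt-to-credit ratio) is nudged just across its boundary; the weights and numbers are illustrative assumptions.

```python
import numpy as np

# Toy linear classifier: approve if w.x + b >= 0.
# The weights and the applicant's numbers are invented, not real credit data.
w = np.array([-2.0, 1.5])   # features: [debt-to-credit ratio, years of history]
b = 0.5

def classify(x):
    score = float(np.dot(w, x) + b)
    return ("approve" if score >= 0 else "deny"), score

applicant = np.array([0.69, 0.60])
print(classify(applicant))                  # ('approve', ~0.02): just inside the line

# Nudge the debt ratio by two hundredths, enough to cross the boundary.
print(classify(applicant + np.array([0.02, 0.0])))   # ('deny', ~-0.02)
```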

The stakes for making such systems resistant to manipulation are obviously high in many domains, but perhaps especially so in the field of medical imaging. Deep-learning algorithms have already been shown to outperform human doctors in correctly identifying skin cancers. But a recent study from Harvard Medical School coauthored by Nelson professor of biomedical informatics Isaac Kohane (see “Toward Precision Medicine,” May-June 2015, page 17), together with Andrew Beam and Samuel Finlayson, showed that the addition of a small amount of carefully engineered noise “converts an image that the model correctly classifies as benign into an image that the network is 100 percent confident is malignant.” This kind of manipulation, invisible to the human eye, could lead to nearly undetectable health-insurance fraud in the $3.3-trillion healthcare industry as a duped AI system orders unnecessary treatments. Designing an AI system ethically is not enough—it must also resist unethical human interventions.
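The “carefully engineered noise” in attacks of this kind is typically computed from the model’s own gradients. The sketch below shows one standard recipe, the fast gradient sign method; it is an assumption about the general technique, not the specific procedure used in the Harvard study.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    """Craft an adversarial image by nudging every pixel a tiny step in the
    direction that most increases the model's loss (fast gradient sign method).
    `model` is any differentiable PyTorch classifier; `image` is a batched
    tensor with values in [0, 1]; `epsilon` bounds the per-pixel change so the
    noise stays invisible to the human eye."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()   # the engineered noise
    return adversarial.clamp(0.0, 1.0).detach()
```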

Yaron Singer, an associate professor of computer science, studies AI systems’ vulnerabilities to adversarial attacks in order to devise ways to make those systems more robust. One way is to use multiple classifiers. In other words, there is more than one way to draw the line that successfully classifies pixels in a photograph of a school bus as yellow or not yellow. Although the system may ultimately use only one of those classifiers to determine whether the image does contain a school bus, the attacker can’t know which classifier the system is using at any particular moment—and that increases the odds that any attempt at deception will fail.
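A minimal sketch of that idea follows; it is an illustrative simplification, not Singer’s actual system. Several independently trained classifiers sit behind a single prediction call, and each query is answered by a member chosen at random.

```python
import random

class RandomizedEnsemble:
    """Answer each query with a classifier chosen at random from an ensemble,
    so an attacker cannot know which decision boundary is in play for any
    particular input."""

    def __init__(self, classifiers):
        # Classifiers trained with different seeds, features, or data subsets,
        # so each draws its boundary in a slightly different place.
        self.classifiers = list(classifiers)

    def predict(self, x):
        return random.choice(self.classifiers).predict(x)
```

An input engineered to slip across one member’s boundary will often land on the correct side of another’s, so the random draw raises the odds that the deception fails.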

Singer points out that adding noise (random variations in brightness or color information) to an image is not in itself unethical—it is the uses, not the technology itself, that carry moral force. For example, noise can be used with online postings of personal photographs as a privacy-ensuring measure to defeat machine-driven facial recognition—a self-protective step likely to become more commonplace as consumer-level versions of noise-generating technologies become widely available. On the other hand, as Singer explains, were such identity-obfuscating software already widely available, Italian police would probably not have apprehended a most-wanted fugitive who’d been on the run since 1994. He was caught in 2017, perhaps when a facial recognition program spotted a photo of him at the beach, in sunglasses, on Facebook.
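As a rough illustration of that privacy use, the sketch below adds small random variations to a photo’s pixel values before it is shared. It is an assumption-laden toy: real obfuscation tools engineer the perturbation against particular recognition models rather than drawing it purely at random.

```python
import numpy as np

def add_privacy_noise(image, strength=0.03):
    """Add small random brightness/color variations to an image whose pixel
    values lie in [0, 1]; `strength` caps how far each pixel can move.
    Consumer obfuscation tools typically target specific recognition models,
    so purely random noise like this is only a simplified illustration."""
    noise = np.random.uniform(-strength, strength, size=image.shape)
    return np.clip(image + noise, 0.0, 1.0)
```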
