AI and Adversarial Attacks

Vulnerabilities to manipulation

The privacy and security issues surrounding big data, the lifeblood of artificial intelligence, are well known: large streams and pools of data make fat targets for hackers. AI systems have an additional vulnerability: inputs can be manipulated in small ways that completely change their decisions. A credit score, for example, might rise significantly if one of the data points used to calculate it were altered only slightly. That’s because such systems classify each piece of input data in a binary manner, placing it on one side or the other of an imaginary line, the decision boundary drawn by a classifier. Perturb the input ever so slightly (say, by altering the ratio of debt to total credit) but just enough to cross that line, and the score the AI system calculates changes.
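
To make that concrete, here is a minimal sketch in Python, with made-up weights and feature names rather than any real credit model, of a toy linear classifier whose decision flips when one input is nudged by a single percentage point:

    import numpy as np

    # Toy credit model (hypothetical weights): approve if sigmoid(w.x + b) >= 0.5.
    w = np.array([2.0, -3.0])        # weights for [income_ratio, debt_to_credit]
    b = -0.1

    def approve(x):
        score = 1 / (1 + np.exp(-(w @ x + b)))
        return score, score >= 0.5

    x = np.array([0.50, 0.30])       # applicant sits exactly on the boundary
    print(approve(x))                # score 0.5 -> approved

    x2 = x + np.array([0.0, 0.01])   # raise debt-to-credit by one point
    print(approve(x2))               # score ~0.4925 -> denied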

The stakes for making such systems resistant to manipulation are obviously high in many domains, but perhaps especially so in the field of medical imaging. Deep-learning algorithms have already been shown to outperform human doctors in correctly identifying skin cancers. But a recent study from Harvard Medical School coauthored by Nelson professor of biomedical informatics Isaac Kohane (see “Toward Precision Medicine,” May-June 2015, page 17), together with Andrew Beam and Samuel Finlayson, showed that the addition of a small amount of carefully engineered noise “converts an image that the model correctly classifies as benign into an image that the network is 100 percent confident is malignant.” This kind of manipulation, invisible to the human eye, could lead to nearly undetectable health-insurance fraud in the $3.3-trillion healthcare industry as a duped AI system orders unnecessary treatments. Designing an AI system ethically is not enough; it must also resist unethical human interventions.
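
The engineered noise in such attacks is typically computed from the model’s own gradients. Here is a minimal sketch of one standard technique, the fast gradient sign method; the study’s exact procedure may differ, and `model` stands in for any differentiable image classifier:

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, image, true_label, epsilon=0.01):
        """Add imperceptible noise that pushes `image` across the model's boundary.

        image: 1 x C x H x W float tensor with values in [0, 1]
        true_label: length-1 LongTensor holding the correct class
        """
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), true_label)
        loss.backward()
        # Step every pixel slightly in the direction that most increases the loss.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0, 1).detach()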

Yaron Singer, an associate professor of computer science, studies AI systems’ vulnerabilities to adversarial attacks in order to devise ways to make those systems more robust. One defense is to use multiple classifiers: there is more than one way to draw a line that correctly separates, say, the yellow pixels in a photograph of a school bus from those that are not yellow. The system may ultimately use only one of those classifiers to determine whether the image contains a school bus, but an attacker can’t know which classifier is in use at any particular moment, and that increases the odds that any attempt at deception will fail.
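
A simplified sketch of that randomized-defense idea (an illustration, not Singer’s published algorithm): train several equally valid classifiers and pick one at random for each prediction, so noise crafted against one boundary may fail against another.

    import random

    class RandomizedEnsemble:
        """Serve each prediction from a randomly chosen classifier."""

        def __init__(self, classifiers):
            self.classifiers = classifiers   # each maps an input to a label

        def predict(self, x):
            # The attacker cannot know which decision boundary answered
            # this particular query, so a perturbation tailored to one
            # classifier may land on the wrong side of another.
            return random.choice(self.classifiers)(x)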

Singer points out that adding noise (random variations in brightness or color information) to an image is not in itself unethical: it is the uses, not the technology itself, that carry moral force. For example, noise can be added to personal photographs before they are posted online as a privacy measure to defeat machine-driven facial recognition, a self-protective step likely to become more commonplace as consumer-level versions of noise-generating technologies become widely available. On the other hand, as Singer explains, had such identity-obfuscating software been widely available a few years ago, Italian police would probably not have apprehended a most-wanted fugitive who’d been on the run since 1994. He was caught in 2017, perhaps after a facial-recognition program spotted a photo of him at the beach, in sunglasses, on Facebook.
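
At its simplest, such noise is just small random variations added to each pixel, as in the minimal sketch below; real anti-recognition tools rely on carefully engineered rather than purely random perturbations.

    import numpy as np

    def add_noise(image, sigma=2.0):
        """image: H x W x 3 uint8 array; returns a slightly noisy copy."""
        noise = np.random.normal(0.0, sigma, image.shape)
        return np.clip(image.astype(float) + noise, 0, 255).astype(np.uint8)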
