Foreseeing Self-Harm

Illustration by Davide Bonazzi

Psychology professor Matthew Nock has spent his career studying self-harm, yet he remains humbled by how little is understood about why people kill themselves. Suicide is the tenth-leading cause of death in the United States, and the rate held roughly steady across the population for most of the last century before rising somewhat in recent decades.

Academic theories of suicide emerged in the nineteenth century. Émile Durkheim wrote about social determinants of suicide in his foundational (though now controversial) text on the differences in suicide rates among Protestants and Catholics in Europe. Freud thought depression and suicide reflected inwardly directed anger. As psychology became the domain of empirical research, clinicians came to rely on factors correlated with suicide—like depression, poor impulse control, or substance abuse—to determine whether a patient was at risk. But a recent review of several hundred studies of suicidal thoughts and behaviors over the last 50 years, co-authored by Nock and a team of fellow scholars and published in Psychological Bulletin, finds that such risk factors have been virtually no better than random guesses at predicting suicide.

One shortcoming of traditional risk factors is that they require clinicians to rely on self-reported information from patients. What if patients aren’t forthcoming because they don’t want to be hospitalized, or are unable to report their emotional states? The bigger problem, Nock explains, is that each factor individually contributes so little to suicide risk. Depression, for example, may be correlated with suicide, but the proportion of patients with depression who attempt suicide is still vanishingly small. The clinical human brain, Nock continues, “isn’t well prepared to assess dozens of risk factors at a time, weigh them all, and then combine those weights into one probability that a person is going to attempt suicide. So clinicians will focus on one or two risk factors, or they’ll ask a patient, ‘Are you thinking about hurting yourself?’ and just rely on that and forget all the risk factors.”
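The combining-and-weighing task that Nock describes as overwhelming for a clinician is exactly what a statistical model does mechanically. The article does not specify the model's internals; a minimal illustration of the idea—weighting many risk factors and folding them into a single probability, as in logistic regression—might look like the sketch below, where the factor names, weights, and bias are all invented for illustration:

```python
import math

def risk_probability(factors, weights, bias):
    """Combine many weighted risk factors into one probability
    using a logistic (sigmoid) function."""
    score = bias + sum(weights[name] * value for name, value in factors.items())
    return 1.0 / (1.0 + math.exp(-score))

# Hypothetical binary factors (1 = present, 0 = absent) and invented weights.
weights = {"depression": 1.2, "substance_abuse": 0.9, "poor_impulse_control": 0.6}
patient = {"depression": 1, "substance_abuse": 1, "poor_impulse_control": 0}

p = risk_probability(patient, weights, bias=-4.0)
```

A model like this can attend to dozens of factors at once without the selective focus Nock describes in human clinicians; the hard part, as the research shows, is that even the combined signal is weak.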

The predictive failure of individual risk factors may be linked with psychologist Thomas Joiner’s theory of suicide. He has argued that suicide risk depends not just on the will to die, but also on an additional “acquired capability” to kill oneself: the ability to overcome the fear of death through previous experiences of one’s own or another’s trauma, or intentional self-harm.

To comb through the many factors contributing to suicide risk in a more systematic way, Nock (profiled in “A Tragedy and a Mystery,” January-February 2011, page 32) and colleagues have been working on a new approach that uses a computer algorithm. Last September, they published the results of an early algorithm, not yet ready for clinical use, developed using health records from the Partners HealthCare system. The program scanned 1.72 million electronic medical records for every medical code—age, sex, number of doctor’s visits, and each illness or health complaint—that might predict suicide. The resulting model predicted 45 percent of actual suicide attempts, on average three to four years in advance.

“Not surprisingly, codes like depression and substance abuse have big effects,” Nock reports, but so do “non-mental-health codes related to gastrointestinal problems, getting cuts, and accidentally taking too much of a certain drug. What we may be picking up,” he suggests, “is a sort of practice to make a suicide attempt, or low-level non-lethal suicide attempts that don’t get coded as such.”
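The preprocessing the study describes—scanning each record for every medical code—amounts to turning a patient's history into a vector of indicator features, one per code. A minimal sketch of that step, with invented patients and a handful of illustrative ICD-style codes, might look like this:

```python
# Turn each patient's medical history into a binary feature vector,
# one column per medical code observed anywhere in the dataset.
# Patients and code lists here are invented for illustration.
records = {
    "patient_a": ["F33", "K58", "T43"],  # depression, GI complaint, drug overdose
    "patient_b": ["K58"],                # GI complaint only
}

# Collect every distinct code across all records, in a fixed order.
vocabulary = sorted({code for codes in records.values() for code in codes})

def to_feature_vector(codes, vocabulary):
    present = set(codes)
    return [1 if code in present else 0 for code in vocabulary]

matrix = {pid: to_feature_vector(codes, vocabulary)
          for pid, codes in records.items()}
```

Vectors like these are what a model can weigh en masse—including the non-mental-health codes (gastrointestinal problems, cuts, accidental overdoses) that Nock suggests may capture rehearsals for an attempt.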

The catch, though, is that the model also produced many false positives: the vast majority of those it identified as at risk did not go on to make a suicide attempt. And even though the algorithm’s success rate was high relative to existing methods, it still failed to predict the majority of attempts. In the future, the paper suggests, the model might be improved by accounting for the compounding effects of multiple factors: how might drug addiction, for example, affect suicide risk differently when coupled with a patient’s age or mental health? And some element of suicide risk, Nock admits, might simply be random, or not captured by medical codes at all.
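Why false positives swamp true positives for a rare outcome can be seen with rough arithmetic. The article reports the model's 45 percent sensitivity but not its false-positive rate, so the base rate and false-positive rate below are invented purely to illustrate the effect:

```python
# Illustrative base-rate arithmetic; only the 45% sensitivity comes
# from the study, the other numbers are assumptions.
population = 1_000_000
base_rate = 0.001            # assume 1 in 1,000 patients attempts suicide
sensitivity = 0.45           # share of attempters the model flags (from the study)
false_positive_rate = 0.05   # assume 5% of non-attempters are flagged anyway

attempters = population * base_rate
non_attempters = population - attempters

true_positives = attempters * sensitivity
false_positives = non_attempters * false_positive_rate

# Precision: the fraction of flagged patients who actually attempt suicide.
precision = true_positives / (true_positives + false_positives)
```

Under these assumptions, fewer than one in a hundred flagged patients would be a true positive—false positives outnumber true ones more than a hundred to one, simply because non-attempters so vastly outnumber attempters.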

Nock doesn’t claim that a computer algorithm can replace in-person treatment, but the model represents one of the many ways in which medicine is being transformed by machine learning. Algorithms, this work suggests, can be applied not just to assessing suicide risk but also to other clinical domains where human judgment falls short.

By Marina N. Bolotnikova
March-April 2017 issue
