Foreseeing Self-Harm

Illustration by Davide Bonazzi

Psychology professor Matthew Nock has spent his career studying self-harm, but he remains humbled by how little is yet understood about why people kill themselves. Suicide is the tenth-leading cause of death in the United States, and the rate remained roughly steady across the population for most of the last century before rising somewhat in recent decades.

Academic theories of suicide emerged in the nineteenth century. Émile Durkheim wrote about social determinants of suicide in his foundational (though now controversial) text on the differences in suicide rates among Protestants and Catholics in Europe. Freud thought depression and suicide reflected inwardly directed anger. As psychology became the domain of empirical research, clinicians came to rely on factors correlated with suicide—like depression, poor impulse control, or substance abuse—to determine whether a patient was at risk. But a recent review of several hundred studies of suicidal thoughts and behaviors during the last 50 years, co-authored by Nock and a team of fellow scholars and published in Psychological Bulletin, finds that risk factors have been virtually no better than random guesses at predicting suicide.

One shortcoming of traditional risk factors is that they require clinicians to rely on self-reported information from patients. What if patients aren’t forthcoming because they don’t want to be hospitalized, or are unable to report their emotional states? The bigger problem, Nock explains, is that each factor individually contributes so little to suicide risk. Depression, for example, may be correlated with suicide, but the proportion of patients with depression who attempt suicide is still vanishingly small. The human brain, Nock continues, “isn’t well prepared to assess dozens of risk factors at a time, weigh them all, and then combine those weights into one probability that a person is going to attempt suicide. So clinicians will focus on one or two risk factors, or they’ll ask a patient, ‘Are you thinking about hurting yourself?’ and just rely on that and forget all the risk factors.”

The predictive failure of individual risk factors may be linked with psychologist Thomas Joiner’s theory of suicide. He has argued that suicide risk depends not just on the will to die, but also on an additional “acquired capability” to kill oneself: the ability to overcome the fear of death through previous experiences of one’s own or another’s trauma, or intentional self-harm.

To comb through the many factors contributing to suicide risk in a more systematic way, Nock (profiled in “A Tragedy and a Mystery,” January-February 2011, page 32) and colleagues have been working on a new approach that uses a computer algorithm. Last September, they published the results of an early algorithm, not yet ready for clinical use, developed using health records from the Partners Healthcare system. The program scanned 1.72 million electronic medical records for every medical code—age, sex, number of doctor’s visits, and each illness or health complaint—that might predict suicide. The resulting model predicted 45 percent of the actual suicide attempts, on average three to four years in advance.
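The article does not describe the model’s internals, but the basic idea—scoring a patient by weighing many medical codes jointly, rather than one risk factor at a time—can be sketched as follows. Everything here is illustrative: the codes, weights, and patients are invented, and the published model was trained on real records rather than hand-set weights.

```python
import math

# Hypothetical patient records: each is the set of medical codes in the chart.
records = {
    "patient_a": {"F33.1", "F10.20", "S61.0"},  # depression, alcohol use, laceration
    "patient_b": {"J06.9"},                      # common cold
}

# Hypothetical per-code weights of the kind a model might learn from data.
weights = {"F33.1": 1.2, "F10.20": 0.9, "S61.0": 0.6, "J06.9": 0.0}
bias = -4.0  # keeps baseline risk low, since attempts are rare

def risk_score(codes):
    """Logistic-regression-style score: sum learned weights, squash to (0, 1)."""
    z = bias + sum(weights.get(code, 0.0) for code in codes)
    return 1.0 / (1.0 + math.exp(-z))

for pid, codes in records.items():
    print(pid, round(risk_score(codes), 3))
```

The point of such a model is exactly what Nock says the unaided clinician cannot do: combine dozens (here, a handful) of weak signals into a single probability.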

“Not surprisingly, codes like depression and substance abuse have big effects,” Nock reports, but so do “non-mental-health codes related to gastrointestinal problems, getting cuts, and accidentally taking too much of a certain drug. What we may be picking up,” he suggests, “is a sort of practice to make a suicide attempt, or low-level non-lethal suicide attempts that don’t get coded as such.”

The catch, though, is that the model also picked up a lot of false positives: the vast majority of those identified as at risk for suicide did not make a suicide attempt. And even though the algorithm’s success rate was high relative to existing methods, it still failed to predict the majority of attempts. In the future, the paper suggests, the model might be improved by accounting for the compounding effects of multiple factors. How might drug addiction, for example, affect suicide risk differently when coupled with a patient’s age or mental health? And some element of suicide risk might simply be random, Nock admits, or not captured by medical codes.
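In machine-learning terms, “compounding effects” are interaction terms: features built from products of other features, so a model can learn that, say, substance abuse matters differently at different ages. A minimal sketch, with invented feature names and values:

```python
from itertools import combinations

def with_interactions(features):
    """Augment a feature dict with pairwise interaction terms (products)."""
    augmented = dict(features)
    for a, b in combinations(sorted(features), 2):
        augmented[f"{a}*{b}"] = features[a] * features[b]
    return augmented

# Hypothetical patient: scaled age plus two binary indicators.
base = {"age_scaled": 0.2, "substance_abuse": 1.0, "depression": 1.0}
print(with_interactions(base))
# The three original features gain three pairwise products, e.g.
# "depression*substance_abuse", which a model can weight on its own.
```

With such terms, the model is no longer forced to assign drug addiction a single weight regardless of context.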

Nock doesn’t claim that a computer algorithm can replace in-person treatment, but the model represents one of the many ways in which medicine is being transformed by machine learning. Algorithms, this work suggests, can be applied not just to assess suicide risk but also in other clinical domains where human judgment fails.

By Marina N. Bolotnikova
