For millennia, people experienced angina pectoris and heart attacks, but it wasn’t until the 1910s and 1920s that physicians began concerted efforts to discover their biological causes. During the twentieth century, heart disease began to climb from relative obscurity to its now longstanding status as the leading cause of death for American adults. It has held that position every year except those between 1918 and 1920, when it yielded to the influenza pandemic. Yet cardiac care in 2013 is dramatically more advanced than it was in 1910—isn’t it?
In his new book, Broken Hearts: The Tangled History of Cardiac Care (Johns Hopkins), David S. Jones ’92, M.D. ’97, Ph.D. ’01, Ackerman professor of the culture of medicine, narrates the history of two of American medicine’s highest-profile treatments for heart disease: coronary artery bypass grafts and angioplasty. Each intervention, promising lifesaving relief, was embraced with enthusiasm by cardiologists and cardiac surgeons—and both techniques often do provide rapid, dramatic reduction of the alarming pain associated with angina. Yet, as Jones painstakingly explains, it took years to show whether the procedures prolonged lives; in both cases, subsequent research deflated those early hopes. The interventions—major procedures, with potentially significant side effects—provided little or no improvement in survival rates over standard medical and lifestyle treatment except in the very sickest patients. From his detailed study, Jones draws broader conclusions about the culture and practice of modern medicine.
“Doctors generate better knowledge of efficacy than of risk, and this skews decisionmaking,” he says. “They design treatments to do something specific, and design studies to see if those treatments achieved those outcomes; and so accumulate lots of data on whether treatments produce the desired effects. Capturing good knowledge of side effects, especially the unanticipated ones that are so common, is both less interesting and more difficult. Whenever doctors have more thorough knowledge of the possible benefits of a treatment than they do of its potential risks, patients and doctors will lean towards intervention.”
Within cardiac care, examples of medical intervention include surgical procedures, such as coronary bypass operations, and invasive treatments like angioplasty. Coronary bypass has the longer history, traceable to 1910, when one surgeon made an (unsuccessful) attempt to perform bypass surgery on a dog. But it wasn’t until 1968 that Rene Favaloro of the Cleveland Clinic described his success with human coronary artery bypass surgery: he grafted a vein taken from the patient’s leg into the heart’s vascular system to replace a blocked coronary artery. Favaloro’s report captured the imagination of many surgeons. Initially they operated on stable patients with modest coronary artery disease. Within a few years, however, as surgeons became more adept at slipping new veins into the heart vasculature, they operated on ever-sicker patients, and even dared to operate during heart attacks. The holy grail soon became clear: act preemptively and operate before a heart attack occurs. By 1977 cardiac surgeons were performing 100,000 bypass procedures per year; the operation’s popularity peaked at 600,000 instances in 1996. Since then, patients like Bill Clinton and David Letterman have kept the procedure in the limelight.
Yet there was a fly in the ointment. The first randomized clinical trial of bypass surgery’s efficacy, using data from a collaboration of Veterans Administration hospitals, was not published until 1977. Such trials were then becoming the gold standard of medical research (and still are). “Surgeons said trials were totally unnecessary, as the logic of the procedure was self-evident,” says Jones. “You have a plugged vessel, you bypass the plug, you fix the problem, end of story.” But the 1977 paper showed no survival benefit in most patients who had undergone bypass surgery, as compared with others who’d received conservative treatment with medication. “There was a firestorm of controversy,” Jones says. “There was lots of money, institutional power, and lots of lives at stake. The surgeons dismissed the trial for technical reasons. So, many other trials were done, all more or less showing the same thing: bypass surgery improved survival for a few patients with the most severe forms of coronary artery disease, but for most others it relieved symptoms but did not extend lives.” The results raise a philosophical question of the goal of medical treatment: alleviating symptoms or lengthening lives? “How much is it worth investing in a surgical procedure, with all its risks,” he asks, “if all you’re doing is relieving symptoms?”
The advent of angioplasty in the 1980s complicates the story. With angioplasty, instead of bypassing the plugged artery, “you use a balloon to compress the plug,” Jones explains, “and (as it’s done today) you leave a stent behind to keep the blood vessel open, and so restore blood flow to the heart.” Like bypass surgery, angioplasty went from zero to 100,000 procedures annually with no clinical trial to assess long-term outcomes—based on the logic of the procedure and patients’ reports of how much better they felt. Yet the first clinical trials, which appeared in the early 1990s, showed no survival benefit of elective angioplasty as compared with medication.
Moreover, because such trials assess patients’ outcomes several years after their treatments, they often end up reporting the results of outdated procedures. “A clinical trial on angioplasty published in 1992 might study a group of patients who had the procedure in 1985,” says Jones. “But angioplasty has been refined since 1985. So you start another trial in 1992 and publish in 1998; then, the cardiologists say, ‘Now we have fancy stents, not those old-fashioned stents they used in 1992.’ And so on. As long as you continue to innovate in a way that, at face value, looks to be an improvement, the believers can always step out from under the weight of negative clinical experience by saying that the research necessarily applies to an earlier state of medical technology.”
Furthermore, “patients are wildly enthusiastic about these treatments,” he says. “There’ve been focus groups with prospective patients who have stunningly exaggerated expectations of efficacy. Some believed that angioplasty would extend their life expectancy by 10 years! Angioplasty can save the lives of heart-attack patients. But for patients with stable coronary disease, who comprise a large share of angioplasty patients? It has not been shown to extend life expectancy by a day, let alone 10 years—and it’s done a million times a year in this country.” Jones adds wryly, “If anyone does come up with a treatment that can extend anyone’s life expectancy by 10 years, let me know where I can invest.”
“The gap between what patients and doctors expect from these procedures, and the benefit that they actually provide, shows the profound impact of a certain kind of mechanical logic in medicine,” he explains. “Even though doctors value randomized clinical trials and evidence-based medicine, they are powerfully influenced by ideas about how diseases and treatments work. If doctors think a treatment should work, they come to believe that it does work, even when the clinical evidence isn’t there.”
Though he concentrated in history and science (and fenced for the varsity team) in college, Jones dutifully fulfilled the undergraduate requirements for attending medical school. He focused on geology, however, and wrote his honors thesis on Mount Vesuvius. “I knew there were courses in the history of medicine,” he recalls, “which I avoided like the plague.”
But he took a small history of medicine class in his first year of medical school, and became a research assistant on a project that involved three million cubic feet of documents (freshly declassified by President Bill Clinton) about the testing of plutonium on unsuspecting patients to assess the toxic effects of radiation. That study raised important questions about the cultural and ethical environment of science. After Eileen Welsome’s 1999 book The Plutonium Files (based on her Pulitzer Prize-winning newspaper series) chronicled 50 years of clandestine experiments, Jones says he saw clearly the inevitable and revealing bond between science and society: “It was a great way to convince myself that the work of a medical historian is important and significant.”
He was moved to pursue a Ph.D. in the history of science to complement his medical degree. Jones’s historical eye allowed him to view medicine through a slightly different filter than his peers did, perhaps encouraging a more critical view of why doctors do what they do. His early research examined the epidemics that decimated the American Indians, an analysis that he expounded in his 2004 book Rationalizing Epidemics: Meanings and Uses of American Indian Mortality Since 1600. Analyzing the cycle of diseases that devastated the Native American population, from smallpox to tuberculosis to today’s chronic ailments of obesity, diabetes, and heart disease, Jones argued that rather than simply reflecting differences in immune tolerances to certain pathogens and lifestyles, the epidemics also grew from a web of complex social forces—including forced migration, the changing economic circumstances of displaced native populations, and cultural practices that gave the diseases deadlier power among American Indians.
In Broken Hearts, Jones describes the historic methodological struggles within the medical profession as doctors tried to identify the causes of heart attacks. Since the early 1900s, when the first cases of heart attack were identified in the medical literature, physicians have struggled to explain why hearts fail so suddenly. Understanding why could in turn reveal how heart attacks occur and, doctors hoped, point to the most effective ways to fix the problem. But one of the dirty secrets of cardiac care, says Jones, is that until the 1970s, heart experts could not agree on what was causing heart attacks, rendering their interventions equal parts gamble and trial-by-doing.
Starting with the earliest theory of what triggers heart attacks, every major advance in cardiac treatment, Jones says, mirrors the prevailing views of where the disease comes from. Current theories hold that heart attacks are caused by the buildup of atherosclerotic plaques from high-fat diets and sedentary lifestyles. “But if we went back to Boston 100 or 120 years ago, heart disease was less prevalent, and it was different,” says Jones. “There were people with atherosclerotic coronary artery disease, but the more prevalent forms were syphilitic and rheumatic heart disease, something that reflects the higher prevalence of infectious diseases at the time.”
As reports of heart attacks began to populate the medical literature, competing theories about their cause emerged. The sudden nature of the attacks led some doctors to assume that random spasms of coronary arteries might be responsible. Without any window into the living heart, however, this theory was supported only by indirect evidence, such as similar twitches observed in the blood vessels of rabbits’ ears, and by the fact that not all heart-attack patients showed signs of lesions or clots in their cardiac blood vessels on autopsy. Another popular theory involved clots obstructing blood flow to critical coronary arteries; this fueled the appearance of blood-thinning agents such as heparin as a common treatment for heart-attack patients by the 1950s.
But an equally compelling theory was also emerging, one that had actually been described in an autopsy report back in 1844 that mentioned “several atheromatous lesions, of which a rather significant one was ulcerated and the atheromatous mass extruded into the arterial lumen.” Published in a seldom-read source—the Journal of the Danish Medical Association—the account received little attention, but in the 1930s, the medical examiner of Boston made similar observations and developed the theory of plaque rupture. Heart attacks, according to his idea, happened when atherosclerotic plaques, embedded in the coronary arteries, ruptured and triggered blood clots (thromboses) that blocked blood flow. Confirmation came in the 1960s when pathologists painstakingly sliced and analyzed coronary artery specimens from patients who died of heart attacks: fatal coronary thromboses were nearly always associated with ruptured plaques.
Although initially assumed to be an affliction of the wealthy elite, heart disease was, by the 1930s and 1940s, increasingly recognized among men of all social and economic strata. “This led to a new concern: if someone was working on the assembly line and doing physical labor and had a heart attack, he would be eligible for workers’ compensation,” says Jones. Faced with a potentially enormous financial burden, many employers responded by shifting accountability back to the workers, exempting heart attacks from workers’ compensation altogether.
The prevailing cardiac treatment remained weeks of bed rest, along with an admonition to avoid aggravating or exciting circumstances that would provoke a spasm, or a clot, or a plaque to rupture and trigger a sudden heart attack. “That treatment only suited people who could afford weeks and weeks of bed rest,” says Jones. “The changing recognition of heart disease—that eventually all humans may get it—led to changing sets of responses to the disease and the need for different kinds of treatments.”
Epidemiological surveys like the groundbreaking Framingham Heart Study (a lifestyle study of 5,209 middle-aged residents of Framingham, Massachusetts, begun in 1948 to identify the risk factors for heart disease) began to connect factors like a low-fat diet, exercise, and avoiding smoking to a lower risk of heart attack. But physicians knew that such behavioral changes would challenge their patients. They turned away from prevention and toward treatment: if blocked pipes were the problem, then bypassing the blockage would solve it. “Of course,” says Jones, “treatments for heart disease also generate revenue dwarfing that produced by preventive care.”
But, as Jones says, heart-bypass surgery was a classic case of “learning by doing.” Only as more patients went under the knife could doctors know for sure whether such interventions were actually making a difference in their lives. Much of what justified the first surgeries relied on the assumption that obstructions in heart vessels needed to be cleared; the evidence for this theory rested on autopsy data and animal models of the disease—neither of which, most physicians will agree, is an ideal substitute for the human body. Indeed, Jones says, the seductive logic behind the procedure may have blinded doctors to some serious questions about its safety and efficacy.
The specter of neurological complications from the surgery—which required the use of a heart-lung machine to maintain the flow of oxygenated blood to the brain and body while the heart is stopped during the procedure—started to shadow the field. Early in the history of open-heart surgery, cardiac surgeons recognized the possibility of brain damage in patients. But as coronary artery bypass became more widespread and standardized, surgeons felt the benefits sufficiently outweighed the risk of memory and cognitive problems (which studies have estimated at anywhere from 10 percent to 50 percent) that they generally omitted mention of those complications, even in major publications in reputable journals. “It’s maddening,” says Jones. “I followed the clinical trials in the New England Journal of Medicine that were published on bypass surgery. In 1996 they published a huge study on the cerebral complications of bypass. But in the 20 or so clinical trials involving bypass published since then, how often did they include data on neurological outcomes? Only half make more than a passing mention.”
The reason, he argues, is the bias toward intervention that accompanies most new medical treatments. Both doctors and patients evaluate such innovations by asking if there is a chance they will help. “The truth is, there is almost always a chance something will help; there are very few treatments in which there is zero chance that it will help,” says Jones. “Is there a chance that mastectomy will decrease a woman’s risk of dying of breast cancer? Sure there is. Should we do a mastectomy on all young women, because there is a chance it will help them avoid breast cancer? Of course not; we have to figure out when it is appropriate.”
Angioplasty emerged on the heels of bypass surgery in 1977, when a German cardiologist, Andreas Grüntzig, devised a way to thread a catheter from a groin artery into the heart. Initially, doctors performed angioplasty on patients with stable coronary artery disease; cardiologists were cautious about how useful angioplasty alone would be as a treatment for heart-attack patients. Grüntzig predicted it could substitute for bypass in at most 15 percent of patients who were candidates for surgery.
But it didn’t take long for cardiologists to begin seeing themselves, as a profession, in competition with cardiac surgeons over treating heart patients. Surgery to treat heart attacks was becoming a booming business, pulling in millions in revenue at a typical cost of $10,000 to $15,000 per procedure. Buoyed by the emerging data from the late 1970s onward showing that bypass surgery did not necessarily confer any survival benefit, cardiologists focused on the advantages of angioplasty over surgery: no operation to open the chest, only a small incision in the groin, and a faster recovery time. These benefits were concrete and immediately evident, but cardiologists didn’t know whether angioplasty would improve outcomes to a degree comparable to bypass surgery. Still, the intuitive sense of angioplasty’s lower risk catalyzed a growth spurt from 133,000 procedures in 1986 to more than a million performed annually by the 2000s, forming (together with bypass surgery) a $100-billion industry today.
The problem with balloon angioplasty was re-stenosis: the plaques would re-form within a few weeks of the procedure. But given the visual evidence of plaque, the belief that dealing with it had to translate into some health benefit drove another innovation: stents, mesh-like devices that act as scaffolding for the vessels, designed not just to compress plaques temporarily in partially blocked arteries but to prop the vessels open more persistently. In theory, stenting would prevent re-stenosis.
When faced with evidence that the placement of a foreign object in the vessel walls could itself promote thrombosis, and with recalls of stents that sometimes snapped shut, cardiologists and device-makers simply turned to more flexible materials and laced the stents with drugs that resisted the buildup of thrombus. In 2007, a study of more than 2,000 patients with stable coronary disease showed that, compared to drug therapy alone (such as blood-pressure medications and cholesterol-lowering agents), stents in combination with drug therapy did not lower the risk of having a heart attack or improve survival during a seven-year follow-up period. But the finding did little to curb stent use: two years later, a survey showed that the share of patients who received drug therapy as a first-line treatment before getting stents remained unchanged at 44 to 45 percent.
Jones argues that the predominant explanation of what causes heart attacks—obstructions in the coronary vessels that need to be cleared—is primarily to blame, because it leads to an erroneous emphasis on the highly visible plaques looming on angiogram screens. In fact, these plaques are not heart attacks-in-waiting; smaller, often invisible lesions in the heart vessels are now understood to cause most heart attacks. The problem isn’t so much that bypass surgery or angioplasty or stents aren’t working, Jones explains, but that in some cases, the interventions target the wrong lesions. “Instead of trying to stent every possible lesion, we need to realize that there are certain risks—small plaques—and that we cannot manage them all with stents or bypass. We need interventions, especially lifestyle changes or medications, that address the causes of atherosclerosis, and not just the largest plaques. And we need to accept that there are some large plaques that might not need intervention. What we really need to do, if we want to change the way we make decisions about these procedures, is to change both the culture among physicians and the culture among patients so that they accept a slight increase in risk tolerance.”
Consider, for example, breast and prostate cancer. After doctors and health officials convinced the public that routine screening is the most effective way to detect tumors early, mammograms and prostate-cancer tests became mainstays of routine physical exams. But the U.S. Preventive Services Task Force recently conducted evidence-based reviews of the benefits of such screenings, assessing lives saved against the risks of complications and false positives that the screenings generate. For women under 50, the panel concluded, the risks of unnecessary biopsies and potential infections caused by yearly mammograms outweigh the benefits of the procedure; the panel recommended that women begin screening not at 40 as previously recommended, but at 50. Similarly, an analysis of the prostate-specific antigen (PSA) blood test to detect early signs of prostate cancer did not show a significant survival advantage, and the task force issued the seemingly stunning recommendation that no men, unless they have a history of prostate cancer, be screened with the PSA test.
Advocates and patients immediately criticized the guidelines, citing the inevitable deaths that would occur as people eschewed screening and visited their doctors only when treatment could do little to halt the disease. The American Urological Association continues to push for regular PSA screening, a position that many patients support as well, given the intuitive belief that action is better than inaction.
Jones has some personal experience with such life-and-death decisionmaking. Six years ago, at 37, he was diagnosed with a very rare form of stomach cancer and had a tumor surgically removed. “Mine was cancer therapy as it existed in the 1890s: find a tumor, cut it out, and hope for the best,” he writes in the preface to Broken Hearts. Yet his aftercare was fully twenty-first century, involving frequent positron emission tomography (PET) scans to monitor his condition. Jones has remained cancer-free since then and no longer receives PET scans, as his doctor feels they aren’t needed; he told Jones, “If you were 70 years old, we’d do scans every six months, but if we were to start doing that to you now, you’d die of radiation-induced leukemia before you’re 60.”
So Jones has to tolerate the uncertainty of not knowing whether the cancer has recurred, something an imaging test could reveal: “You have to live with this uncertainty—you can’t get a PET scan each morning.” Similarly, when his first PET scan disclosed some nodules at the base of one of his lungs that “shouldn’t be there,” his doctor offered a lung biopsy but recommended against it. “He said, ‘If you’re willing to ignore them, I’m willing to ignore them.’ So we did,” Jones recalls. He explains that “it’s important not to do everything that could be done. I say this not only as an academic, but as someone in the trenches, a patient experiencing the culture of medicine and having to face my own medical decisions.”
Reassessments of risk, such as those by the Preventive Services Task Force, Jones explains, may ultimately help frame treatment decisions in more realistic, evidence-based terms. Understanding, for example, that not all plaques in the heart need to be removed (studies show that operating or stenting to address stable plaques may not yield longer lives or fewer symptoms) may also prompt more judicious and appropriate use of therapies. “Doctors will have to teach patients a new attitude toward abnormal findings on lab tests and x-rays—that some are okay and don’t require intervention in every case,” he says. “That would be a major shift in the culture of medicine.”