Prescription for Error?
In recent years, safety recalls of widely prescribed drugs like the painkiller Vioxx have sent an unsettling message to consumers: Today’s super cure may be tomorrow’s health hazard. Many drug-industry critics believe the expanding financial influence of “big pharma” has compromised federal oversight of new medicines, allowing unsafe drugs to reach the market and remain there for months or even years. At the same time, patient groups and pharmaceutical investors continue to blame the Food and Drug Administration (FDA) for slowing innovation through bureaucratic inefficiency and over-regulation.
According to Freed professor of government Daniel Carpenter, an expert on the history of the FDA, the flaws in the U.S. drug-review system do not stem from a single culprit, whether profit-driven manufacturer or inept government regulator. Rather, they are rooted in specific legislation intended to balance the cost of ensuring public safety with the need for prompt approval of beneficial medicines. In 1992, Congress passed the Prescription Drug User Fee Act (PDUFA), a program under which pharmaceutical producers pay a “user fee” to the FDA to help cover the cost of reviewing new drugs; in exchange, the FDA rules on applications within a set period of time: 12 months for “standard” reviews and six months for “priority” reviews. Congress revised the law in 1997, shortening the standard-review period to 10 months while maintaining the six-month deadline for priority applications. When the law passed, its detractors charged that it created a dangerous conflict of interest within the drug-review process by making the FDA overly dependent on pharmaceutical companies for funds. But until now, no study had effectively tracked the law’s impact on drug safety.
Carpenter and his coauthors, professor of medicine Jerry Avorn and medical student Evan James Zucker, set out to investigate whether the introduction of drug-approval deadlines has had an adverse effect on the rate of pharmaceutical safety problems. Using a data set showing the approval times for all “new molecular entities” reviewed by the FDA between 1950 and 2004, they looked for changes in the pattern of approval timing before and after enactment of the 1992 legislation. Then they compared drug-approval times with records of post-approval safety problems, identified by the withdrawal of a drug from the market, the addition of a “black box warning” (the severest form of labeling for a drug’s potentially adverse side effects), or the removal of one or more dosage forms (often a first step in a drug’s “quiet exit” from the market).
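The comparison at the heart of the study can be sketched in a few lines of code. This is an illustration only, not the authors’ actual analysis: the field names, the two-month window, and the toy records below are all hypothetical stand-ins for the FDA data set described above.

```python
# Sketch of the study's core comparison (illustrative; not the authors' code):
# flag each approval by whether it fell in the two months before its PDUFA
# deadline, then compare post-approval safety-event rates between groups.
from dataclasses import dataclass

@dataclass
class Approval:
    name: str
    months_before_deadline: float  # time left at approval (negative = missed deadline)
    safety_event: bool  # withdrawal, black-box warning, or dosage-form removal

def event_rate(drugs):
    """Fraction of drugs in the group with a post-approval safety event."""
    return sum(d.safety_event for d in drugs) / len(drugs) if drugs else 0.0

def compare(drugs, window=2.0):
    """Split approvals into the pre-deadline 'pile-up' window vs. all others."""
    near = [d for d in drugs if 0 <= d.months_before_deadline < window]
    rest = [d for d in drugs if not (0 <= d.months_before_deadline < window)]
    return event_rate(near), event_rate(rest)

# Toy records, invented purely for illustration
sample = [
    Approval("A", 1.0, True),
    Approval("B", 0.5, True),
    Approval("C", 6.0, False),
    Approval("D", 8.0, False),
    Approval("E", 1.5, False),
    Approval("F", -1.0, False),
]
near_rate, rest_rate = compare(sample)
```

A higher event rate in the `near` group than in the `rest` group is the kind of pattern the study reports; the actual analysis used the full 1950-2004 approval record rather than a handful of toy rows.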
Their findings, published in The New England Journal of Medicine this past spring, exposed some disturbing correlations. Since passage of the 1992 legislation, approval times have tended to cluster in the two-month period immediately preceding the congressionally stipulated deadlines—a configuration that does not appear in the four-decade period prior to the 1992 legislation. Between 1993 and 2004, a new drug was 3.4 times as likely to be approved in the two months before the deadline as at any other time in the review cycle; and it was 2.7 times as likely to be approved in the two months before deadline as it was in the two months afterward.
This “just in time” approval trend corresponded to an increased rate of post-marketing safety problems. Drugs approved during the two-month “pile-up” period were three times as likely to be pulled from the market as drugs approved at other times in the review cycle, twice as likely to have one or more dosage forms discontinued, and two to seven times as likely to receive a “black box warning.” The researchers were careful to rule out other factors that might explain the pattern. For instance, they found that “new molecular entities” approved in the immediate pre-deadline period were not inherently high risk: they were no more likely to have undergone a pre-marketing advisory review, to be “first in class,” or to be associated with urgent consumer demand (measured by high hospitalization rates for the drug’s primary indication) than drugs approved earlier in the review cycle or after the deadline.
Carpenter and his colleagues believe the study results reflect the negative impact of the user-fee law on FDA decision-making. In previous work, Carpenter and doctoral candidate in government Justin Grimmer developed a mathematical model that predicts the impact of deadline penalties on organizational behavior and outcomes. “We showed that the relationship between the size of the penalty and the probability of an error is non-linear,” Carpenter explains. “If you double the deadline penalty, you could quadruple, or more, the size of the error.”
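Carpenter’s “double the penalty, quadruple the error” summary is consistent with error growing at least quadratically in the penalty. The quadratic form below is an assumed stand-in chosen to match that quote, not the actual Carpenter-Grimmer model:

```python
# Stylized illustration of a non-linear penalty-error relationship.
# The quadratic shape is an assumption matching the quoted "double the
# penalty, quadruple the error"; the real model is not reproduced here.
def expected_error(penalty, k=1.0, power=2.0):
    """Error as a convex (non-linear) function of the deadline penalty."""
    return k * penalty ** power

# Doubling the penalty multiplies the error by 2**power = 4 here
ratio = expected_error(2.0) / expected_error(1.0)
```

With `power` above 2, as the “quadruple, or more” phrasing allows, doubling the penalty would inflate the error even further.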
In the FDA’s case, he says, the primary consequence of missing any one drug application deadline is not monetary, but rather reputational. “The problem is that the FDA is going to get criticized,” Carpenter explains. “Once deadlines come up, all sorts of interested parties start to weigh in, such as investors, financial analysts, and patient advocacy groups.” Federal drug reviewers are constrained by the need to avoid external criticism, even as they vie for internal status. “This wasn’t Congress’s intent,” he says. “But reputation turns out to be a very big incentive for agencies.”
The solution, Carpenter concludes, is for legislators to focus on the quality of the approval process, rather than exclusively on its speed. Only by increasing funds for necessary FDA staff, he says, will Congress ensure that the agency is able to fulfill its public obligations in a timely manner. “We all work under deadlines,” he adds. “The question is how much we should rely on deadlines versus other mechanisms to improve and accelerate review.”