Christmas came early for the pharmaceutical industry this year. Last week, the Senate followed the House in passing the 21st Century Cures Act. Though this bill has been lauded by liberals for providing much-needed funds for medical research, its real impact will be elsewhere. Whereas drug approval traditionally required the demonstration of real clinical benefit in a randomized clinical trial, under the Act drug firms will increasingly be able to rely on flimsier forms of evidence for approval of their therapies (incremental steps in this direction, it is worth noting, have already occurred). The Act, by reconfiguring the drug regulatory process, lowers the standards for drug approval—a blessing for drug makers, but an ill omen for public health.
In the Senate, a grand total of five senators—including Bernie Sanders and Elizabeth Warren—voted against it. The media, meanwhile, has for the most part done a poor job dissecting its actual contents. As a result, few now realize how detrimental the Act is likely to be for drug safety, or appreciate the mix of conservative ideology and pharmaceutical industry greed underlying the longstanding campaign that brought it to fruition.
The thinking behind the 21st Century Cures Act—and like-minded proposals—goes something like this: In the twenty-first century, the pharmaceutical industry—driven by the profit motive—continues to do a fine job innovating new therapies. Far too often, however, it is held back by risk-averse, slow-moving FDA bureaucrats with outdated standards for approval. “Modernize” the FDA—release the cures! Yet if the law had done nothing other than weaken FDA standards, it might not have passed: Liberals understandably embraced the Act’s new NIH funding, its mental health provisions, and its support for state anti-opioid programs. For Democrats, it also represented the sort of bipartisan “victory” that shows that all is not gridlock in Washington, after all.
Yet this thinking is flawed on multiple levels: “We need to remember,” as former editor-in-chief of the New England Journal of Medicine Marcia Angell wrote in her 2004 pharmaceutical exposé, The Truth About the Drug Companies, “that much of what we think we know about the pharmaceutical industry is mythology spun by the industry’s immense public relations apparatus.” First among these myths is the notion that the status quo of private sector drug research and development is the best of all worlds. On the contrary, as Angell put it, “me-too” drugs—lucrative, duplicative agents that do not improve on existing therapies—are in fact the “main business of the pharmaceutical industry.” We can’t rely on the profit motive to bring forth new cures, when it’s just as easy for companies to make big profits by redesigning or tweaking drugs that already exist.
Second, the notion of a slow-moving, risk-averse FDA is wrong: If anything, the agency’s drug review process is sometimes too hasty, while its standards of evidence for approval are frequently too lax. Consider, for instance, two recent studies of new cancer drugs. The first—published a year ago in JAMA Internal Medicine by Chul Kim and Vinay Prasad—looked at cancer drugs approved by the FDA on the basis of “surrogate endpoints” between 2008 and 2012. “Endpoints” is a term for outcomes: Hard clinical endpoints refer to outcomes such as survival, where the benefit to the patient is unambiguous. Surrogate endpoints, however, refer to metrics like the change in the size of a tumor on a CT scan. Though a shrinking tumor sounds like a good outcome, it is only meaningful if it translates into an improvement the patient actually experiences, like a longer life or a better one. Often, however, that’s not the case: New therapies can change numbers without improving our actual health. This is what Kim and Prasad found: Of the 36 drugs approved on the basis of surrogate endpoints, at least half had no demonstrated survival benefit.
Perhaps they had other benefits? Or perhaps not. In late November, Tracy Rupp and Diana Zuckerman, writing in the same journal, examined these 18 drugs and found that not only did they fail to improve survival, but only one had evidence that it improved quality of life (the others lacked data or had no effect, negative effects, or mixed effects). Despite this lack of benefit for either the quantity or quality of life, they note, the FDA withdrew approval for only one drug. Those drugs that either didn’t improve or actually worsened quality of life continue to be sold at an average price of $87,922 per year. Not a bad return for a basically useless drug.
How has this state of affairs come about? At least in part because, as scholar Aaron Kesselheim and colleagues describe in a 2015 study in the British Medical Journal, a total of five new “designations” and one new pathway (“accelerated approval”) have been created since 1983 to lubricate the drug approval process. As of 2014, they found, some two-thirds of drugs were being reviewed through one or more of these expedited programs, which sometimes allow them to be approved more quickly, in some instances with skimpier evidence.
The 21st Century Cures Act will only take us further down this road. Indeed, as Trudy Lieberman has written at Health News Review, the bill is best seen as the “culmination of a 20-year drive by conservative think tanks and the drug industry that began during the Clinton Administration to ‘modernize’ the FDA.” PhRMA—the industry’s primary lobbying group—alone spent $24.7 million on Cures Act-related lobbying, according to data assembled by the Center for Responsive Politics and reported by Kaiser Health News. No less important, however, are the industry’s generous campaign contributions, which have helped construct a compliant and conducive political climate in Washington over the years.
The Act reverses many of the protections that stemmed from the 1962 Kefauver–Harris Amendments, signed by John F. Kennedy, which bolstered the FDA’s regulatory powers: These reforms meant the FDA could require proof not just that a drug was safe, but that it actually worked, prior to approval.
The 1962 legislation was in part a response to the devastating effects of thalidomide during the late 1950s and early 1960s. Prescribed both for morning sickness and sleeplessness, thalidomide became infamous for the epidemic of birth deformities it unleashed throughout Europe. The drug was never approved in the United States, thanks to one FDA reviewer, Frances Oldham Kelsey, who boldly resisted pressure from the firm trying to market it. The epidemic helped spur a process of reform that eventually produced the three-phase system of clinical trials that became the bedrock of drug approval for the FDA.
Not everybody was happy. In a 1973 Newsweek column about the new FDA (not-so-cleverly titled “Frustrating Drug Advancement”), economist Milton Friedman asserted that, by withholding drugs of unproven but theoretical efficacy from the public, “FDA officials must condemn innocent people to death.” Indeed, Friedman predicted that future research might support the “shocking conclusion that the FDA itself should be abolished,” a move he endorsed. He believed that government regulation was unnecessary: In a free market, the most effective drugs would sell, and the manufacturers of harmful or ineffective drugs could be discouraged, and punished if necessary, through litigation.
Few today would agree with Friedman’s libertarian stance on this issue (though Trump’s potential pick to head the FDA comes close), but the kernel of his point—that government bureaucrats are withholding life-saving medications from the public—has suffused the national discourse, with grave consequences. The concept of “cures deferred” has become a weapon for those who want to “modernize” the FDA, by which they mean to make it easier for drug firms to get lucrative drugs approved, regardless of the drugs’ effectiveness. With the 21st Century Cures Act, they risk taking us back to the era before Kefauver–Harris, or perhaps even earlier, to an age when neither patients nor doctors knew which drugs worked, which did not, and which—for that matter—were deadly.
The 21st Century Cures Act allows “the potential use of real world evidence” to support the approval of a novel indication for an approved drug and “to help to support or satisfy postapproval study requirements.” That sounds rather reasonable: Who could possibly oppose the use of “real world evidence”? The problem, however, is that “real world evidence” here actually means “really bad evidence.” The law defines “real world evidence” as “data… derived from sources other than randomized clinical trials,” which is to say uncontrolled observational data.
This is perhaps the bill’s most dangerous clause. Randomized clinical trials are—in the vast majority of cases—the only way to know if a drug actually works, as observational studies can be notoriously unreliable in determining the efficacy of drugs. In what is perhaps the most famous instance of this, until the early 2000s postmenopausal women were prescribed hormone replacement therapy (HRT) with the understanding that the pills likely reduced cardiovascular disease, a finding seen in large observational studies performed in the 1990s. Eventually, however, a randomized trial was performed, demonstrating that, on the contrary, HRT was giving these women heart attacks and strokes.
Randomized clinical trials are not perfect, especially when they are conducted by pharmaceutical companies with a vested interest in distorting trial methodology (or hiding trial results when they don’t go as planned). But they are the best tool we have when it comes to assessing drug efficacy. With this new provision, however, randomized clinical trials can—in some instances—be skirted altogether, leaving both doctors and patients forever in the dark as to a drug’s actual therapeutic utility for a particular indication. Their profitability, in contrast, will be rather more clear.
The effects of the therapy approval provisions in the Act—and there are many other bad ones, including new programs that allow drug firms to get their medical devices, antibiotics, or so-called “regenerative advanced therapies” approved more quickly, on the basis of weaker evidence, or both—will be insidious and long-lasting. They will “unravel the FDA,” as physicians Reshma Ramachandran and Zackary Berger of Johns Hopkins write in STAT, “turning it from the treatment watchdog it is today into a puppet of the pharmaceutical and medical device industry.”
Although Friedman wanted to abolish the FDA, the industry would be wise to realize that a puppet agency is better for its interests than no agency at all. For the FDA legitimizes the industry’s products with the public stamp of scientific authority: Rather than ending the agency, it is far smarter to capture it.