On March 3, at a conference in Atlanta, a 59-year-old needlepoint expert, former missionary, and specialist in pediatric infectious disease named Hannah Gay announced that she’d found a cure for HIV. In the fall of 2010, she’d started treatment on an infected baby girl in Mississippi, putting the newborn on what was envisioned as a lifelong course of antiretroviral drugs. But when the child dropped off those medications some months later (her mother stopped bringing her to the clinic), the virus never reemerged. The disease had gone away. “She’s healthy,” Gay told me. “She’s a delightful little girl.”
The apparent breakthrough made the modest Dr. Gay an instant, international celebrity. ABC News named her and her research colleagues “the rock stars of the medical community”; Time counted her among the 100 most influential people in the world, right up there with Aung San Suu Kyi and Jay Z. Then, less than two weeks after Gay’s bombshell, more good news arrived: A team in France published data on 14 patients who’d started antiretroviral drugs a few months after they were infected and later—just like the Mississippi baby—went off treatment altogether. Some have now been living drug-free for a decade, with scarcely any evidence that HIV has been leaking out of viral reservoirs to replenish their infections. “The tantalizing and long-sought dream of an AIDS cure,” trumpeted the San Francisco Chronicle, “appears to have arrived.”
But despite all the enthusiasm, the scientific details were surprisingly mundane. Neither “cure” involved new or untested drugs, or even a radical rethinking of accepted protocols. What made the treatments special was the timing: Hannah Gay prescribed a three-drug cocktail when the Mississippi baby was just 30 hours old, squirting a first dose of AZT into the child’s mouth as soon as she got the chance. (Most infected infants would not have started intensive drug treatment until they were at least six weeks old.) By providing therapy so quickly and aggressively, she may have beaten back the germs before they squirreled away inside quiescent cells or dug into hard-to-reach tissues such as the brain. In her infant patient, she found a way to vanquish the infection in its infancy. The data from the French patients, who received a standard set of drugs earlier than most infected adults do, suggest that something similar might be possible for older patients, too.
Doctors once put off prescribing medicine for HIV until their patients were in desperate need, so as to mitigate the toxic side effects and postpone the inexorable development of drug-resistant viral strains. But in more recent years, a consensus has emerged around a protocol summarized as “hit hard and hit early.” Several large-scale studies have shown that attacking HIV infections while they’re fresh can make the virus less contagious and prolong lives. The cases in Mississippi and France provide vivid evidence in support of that approach, suggesting that—in some cases, at least—it can even yield a lasting cure.
There’s a problem with hitting hard and hitting early, though, one that drug research can never hope to solve: It’s impossible to treat someone when she doesn’t know she’s sick. The infected mother of the Mississippi baby was in the dark about her status, so she never had the chance to get an early dose of antiretroviral drugs herself. Recent stats suggest her case is not unusual. At least 18 percent of HIV-positive Americans don’t realize they’re infected, and in developing nations, that rate is almost three times higher. In many countries, two-thirds of pregnant women give birth without ever getting tested for the disease.
That’s what makes the recent stories of a cure so frustrating. The therapies we have are already good enough to win the war on AIDS, but they can only score that victory if we reach more infected people quickly. To catch cases while they’re still developing, we’ll need a much bigger net—a way to screen for HIV that’s close to universal, and a means of starting treatment right away. It used to be that we needed better drugs. Now we need better diagnoses.
You’d think a widespread screening program for HIV would be an easy sell, but its advocates have been struggling to make their case for more than 25 years. Resistance started early in the epidemic, when doctors had no way of treating the disease and patients faced both intense discrimination and the unhelpful knowledge of their own certain death.
After the Food and Drug Administration (FDA) approved the first blood test for HIV in 1985, activists worried about its effect on the vulnerable populations for whom it would be most forcefully promoted. Their concerns soon proved justified. The Reagan White House pushed hard for mandatory testing: The president wanted routine screening of immigrants, and his education secretary, William Bennett, proposed the same regimen for hospital patients and prisoners. State legislatures were even more aggressive. In 1987, Illinois made HIV testing a prerequisite for obtaining a marriage license and formally allowed “isolation of a person or quarantine of a place” to prevent the spread of AIDS. The arrival of AZT (a drug with marginal benefits and ghastly side effects) that same year did little to quell the fear that testing could do more harm than good.
Under pressure from patient groups, the public-health hawks eventually retreated and agreed that tests for HIV should be governed by more exacting standards than those in place for other medical procedures. That meant prior written consent, careful pre- and post-test counseling, and the option to receive anonymous results. Such safeguards protected patients, but in the end, they rankled epidemiologists. “Early on, it was clear that the consent process was designed not only to inform people about the benefits of testing and to avoid coercion,” says Columbia University medical historian Ronald Bayer, “but really, in some instances, to discourage people from getting tested, because there was nothing one could do for them.” The urge to put up obstacles to screening was a product of what he calls “AIDS exceptionalism”—the belief that, given its social context, an HIV diagnosis should be governed by a special set of rules.
Those rules have slowly been changing, though. In September 2006, the Centers for Disease Control and Prevention, reversing its position on the issue, recommended universal, opt-out screening for adolescents and adults: Everyone would be tested unless they specifically declined. In 2009, the American College of Physicians endorsed the same system. As of April 2013, even the exceedingly cautious U.S. Preventive Services Task Force (the same one that last year issued the controversial call to end prostate-cancer screening) has concluded “with high certainty that the net benefit of screening for HIV infection ... is substantial.”
Health departments in almost every state have run with the new recommendations. In New York, for example, hospitals and primary-care doctors are now required to offer HIV tests to every patient between the ages of 13 and 64. The rule’s effects so far may be limited—“I am certain it has not been implemented as widely as the law intended,” Bayer says—but its existence marks a significant shift in ideology. So does the fact that, in 2012, the FDA approved the sale of OraQuick, a somewhat less-accurate test for HIV that can be self-administered at home. We’re starting to put aside our squeamishness.
For those at risk of AIDS, the benefits of this change of heart will be enormous. In 2009, a team of researchers at the World Health Organization imagined what would happen if doctors tested every person in South Africa over 15 years old, then put those with HIV on antiretroviral therapy without delay. According to their models, the effort would squeeze the epidemic to a fraction of its present size within a decade.
Universal screening could make a major difference in the United States as well. A group at Stanford has calculated that screening all Americans for HIV at least once in their lives (along with yearly tests for those at higher risk of contracting the disease) could prevent 212,000 new infections over the next 20 years. Such a program would not be cheap, of course, but the researchers estimate that it would buy the equivalent of an extra year of healthy living for every $25,000 spent. That rate of return is comparable to what screening for breast cancer or type 2 diabetes achieves.
Given what we’ve learned about the benefits of early therapy—from Hannah Gay, from her counterparts in France—it’s clear that more effective screening leads to better outcomes all along the line. “We’ve passed the moment where we treat the HIV test as exceptional,” says Bayer. “We should effectively make it routine.”
Daniel Engber is a columnist for Slate.