
How the Gay Rights Canvassing Study Fell Apart


The public unraveling of one of last year’s highest-impact studies, “When contact changes minds: An experiment on transmission of support for gay equality” (Science, December 2014), began the way that many posts for the science watchdog blog Retraction Watch do: Its editor, Ivan Oransky, was mentioned in a tweet. "That @sciencemagazine study on contact w gay ppl changing minds abt gay marriage? It was faked... attn @ivanoransky."

The person who tweeted at him, Lila Guterman, is a deputy managing editor at Science News. Around 3:30 Wednesday morning, she saw a tweet about a paper from Stanford called “Irregularities in LaCour.” (LaCour being Michael LaCour, the lead author of the retracted study—the guy, in other words, who handled the data collection, and apparently the fabrication.)

“Basically I think it was someone who knew someone who knew someone who knew an author of the Stanford paper—and posted it,” Guterman told me Thursday afternoon. “Social media is great for finding tips like this but it was accidental in this case.” Guterman, familiar with Oransky’s work and the MacArthur-funded blog, tweeted at him at 3:37 am.

She received a message from Oransky on Twitter three hours and ten minutes later. “Wow thanks! This is legit right?” At 7:07 am, he messaged her again, saying that he’d confirmed the study’s flaws with Donald Green, the senior co-author of the paper. By 7:09 am, Oransky had a post up on Retraction Watch. It chronicled how a team of three researchers determined that all of LaCour’s unbelievable data was likely just that. The post also detailed how Green, a professor at Columbia, confronted his junior co-author, a graduate student at UCLA.

It's worth noting how quickly this has all come down. In the day and a half since the story broke, it has become clear that LaCour likely faked the data used in the study. Retractions are rare, and they usually take far longer than five months to surface. According to Oransky, only about 0.02 percent of articles are retracted each year—roughly 600 of the approximately three million published annually in recent years.

In the study, real canvassers went out into the field and tried to change people’s minds about gay marriage. The data quantifying the impact they had—what the study was intended to measure, with all its real-world implications—is what LaCour faked. To many, including staff at the Los Angeles LGBT Center who worked with LaCour on data collection, the results seemed too good to be true. Indeed, other studies have demonstrated that trying to change a person’s beliefs can often accomplish the opposite: rooting them more firmly in what they have always believed. That is not to say the results LaCour reported would be impossible to demonstrate honestly. Had he actually recorded and processed the data the canvassers produced in the field, he might still have reached a similar conclusion.

Everything unraveled when the authors of “Irregularities in LaCour”—Broockman, Kalla, and Aronow—took an interest in LaCour and Green’s study and tried to replicate it. When their attempt failed to produce comparable results, they began to suspect that the data couldn’t have come from LaCour’s fieldwork—there were too many irregularities. The data looked as if it had come from another study entirely.

According to Broockman, Kalla, and Aronow’s paper, on May 16th, “Broockman suspect[ed] the CCAP data … form[ed] the source distribution and Kalla [found] the CCAP data in the AJPS replication archive for an unrelated paper.” Green explained it like this: “It’s very difficult to fabricate data from survey materials. If you’re going to do it, you have to draw on an existing, real data set,” he said. “The three researchers found the real survey—the one this was created to emulate—from tens of thousands of data sets. They found the needle in the haystack.”
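What “finding the source distribution” involves can be made concrete with a small example. Below is a minimal sketch in Python (not the investigators’ actual method; the file names and data are hypothetical) of one generic way to test whether a suspect sample and a candidate source dataset share the same distribution, using a two-sample Kolmogorov–Smirnov test. Independently collected surveys almost never match each other closely, so an implausibly close match is itself a red flag.

```python
# A minimal sketch of one generic way to check whether a suspect survey's
# responses might have been lifted from an existing dataset: compare the
# two empirical distributions directly. File names here are hypothetical.
import numpy as np
from scipy import stats

# Hypothetical inputs: one response value per line, e.g., 0-100
# feeling-thermometer scores from the suspect study and from a candidate
# source dataset (such as an existing public survey archive).
suspect = np.loadtxt("suspect_thermometer.csv")
candidate = np.loadtxt("candidate_thermometer.csv")

# The two-sample Kolmogorov-Smirnov test measures the largest gap between
# the two samples' empirical distribution functions. A statistic near zero
# (with a large p-value) means the samples are statistically
# indistinguishable, which is exactly what two independently collected
# surveys should NOT be.
statistic, p_value = stats.ks_2samp(suspect, candidate)
print(f"KS statistic: {statistic:.4f}, p-value: {p_value:.4f}")
```

In this case, per Green’s account above, the candidate source turned out to be the CCAP data, found in a public replication archive for an unrelated paper.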

That same day, Broockman, Kalla, and Aronow reached out to Green to let him know what they’d found. Within a day, Green agreed with them: He would retract the paper unless LaCour could provide “countervailing evidence.” (Broockman, Kalla, and Aronow declined to discuss the timeline of their research in further detail. LaCour himself did not reply to requests for comment.)

By Tuesday, May 19, LaCour was claiming to have lost the data; Green decided that if LaCour didn’t agree to a retraction by that evening, he would send a retraction notice to Science himself. According to Green, LaCour equivocated. And so: “I sent off my retraction, and I went to sleep and I woke up in the morning at 5:30 and there was a lot of email,” Green said. He figured that Broockman, Kalla, and Aronow had posted their paper around the same time he submitted his retraction. By 3:30 the next morning, Wednesday, May 20, Guterman had stumbled upon it. The rest is rapidly moving history.

When the study was published last year, it received a lot of attention from major news outlets—now, just as many organizations are “playing catch up,” as Guterman put it, to report on its retraction. “One wonders if there are still people operating on the assumptions propagated by the first article,” Green said, echoing many people’s concerns. If word of the study’s false result spreads farther than word of its retraction, the implications could be serious, especially for canvassing groups trying to apply its conclusions.

The potential for virality is fundamental to how information moves across the internet, and Oransky suggested why, in this case, that might be a good thing: “The very same tools that give fraudsters the ability to commit fraud are the [ones] that give scientists and people like us [the ability] to uncover that fraud.”