MAGA-DATA?

“Red Wave” Redux: Are GOP Polls Rigging the Averages in Trump’s Favor?

Polling by right-leaning firms has exploded this cycle. Maybe they want to be accurate—or maybe they’re trying to create a sense of momentum for Donald Trump.

Win McNamee/Getty Images
Former President Donald Trump in Detroit on October 18

Last month, a GOP-friendly polling firm presented itself, and its data, in a highly unusual way. Rather than maintain a nominally neutral public-facing profile, this pollster acted more like a cavalry brigade for Donald Trump’s campaign. And the firm did so explicitly, openly, and proudly. 

It all went down in mid-September, at a time when the FiveThirtyEight polling averages showed the slightest of leads for Kamala Harris in North Carolina, a must-win state for Trump. Her edge was short-lived: The averages moved back to favoring Trump. And Quantus Insights, a GOP-friendly polling firm, took credit for this development. When a MAGA influencer celebrated the pro-Trump shift on X (formerly Twitter), Quantus’s account responded: “You’re welcome.” 

The implication was clear. A Quantus poll had not only pushed the averages back to Trump; this was nakedly the whole point of releasing the poll in the first place.

To proponents of what might be called the “Red Wave Theory” of polling, this was a blatant example of a phenomenon that they see as widespread: A flood of GOP-aligned polls has been released for the precise purpose of influencing the polling averages, and thus the election forecasts, in Trump’s favor. In the view of these critics, the Quantus example (the firm subsequently denied any such intent) only made all this more overt: Dozens of such polls have been released since then, and they are in no small part responsible for tipping the averages—and the forecasts—toward Trump.

Coming at a time when right-wing disinformation is soaring—and Trump’s most feverish ally, Elon Musk, is converting X into a bottomless sewer pit of MAGA-pilled electoral propaganda—these critics see all this as a hyper-emboldened version of what happened in 2022, when GOP polls flooded the polling averages and arguably helped make GOP Senate candidates appear stronger than they were, leading to much-vaunted predictions of a “red wave.” Most prominently, Democratic strategist Simon Rosenberg and data analyst Tom Bonier, who were skeptical of such predictions in 2022 and ultimately proved correct, are now warning that all this is happening again. 

In their telling, GOP data is serving an essential end of pro-Trump propaganda, which is heavily geared toward painting him as a formidable, “strong” figure whose triumph over the “weak” Kamala Harris is inevitable. This illusion is essential to Trump’s electoral strategy, goes this reading, and GOP-aligned data firms are concertedly attempting to build up that impression, both in the polling averages and in media coverage that is gravitationally influenced by it. They are also engaged in a data-driven psyop designed to spread a sense of doom among Democrats that the election is slipping away from them.

But the guardians of our nation’s polling averages at FiveThirtyEight, The New York Times, and elsewhere all adamantly deny that GOP polls are seriously harming their averages and forecasts, and they offer their own data-driven case to back that up. So who’s right?

We think many of the worries about a “red waving” of the polls are legitimate—indeed, that’s a view shared in part by one polling aggregator and several former GOP strategists we interviewed. But the aggregators do offer a plausible defense of their methodologies, and it’s simply impossible to know who will be proven right about the correct level of concern here until after Election Day.

In many ways, the polling debate of 2024 comes down to this dilemma. On the one hand, pollsters undercounted Donald Trump’s vote in 2016 and 2020. On the other, in 2022, some of the averages, fed by GOP data, inspired certain observers to discern the infamous red wave that never materialized. So the question now is: Will 2024 be more like 2016 and 2020, presidential elections in which there was a hidden Trump vote? Or will it be more like 2022, a midterm campaign but also the first post-Dobbs election, when at least some observers missed the Democratic vote that turned out in no small part in response to the Supreme Court taking away the right to an abortion? 

The 2022 cycle also arguably saw a new phenomenon really come to the fore: the rise of openly right-leaning pollsters that consistently showed better results for Republican candidates. Now these questions have once again arisen: Should these pollsters be included in aggregators’ averages or not? And what should you think of the case for their inclusion made by the aggregators, which is that they weight polls in a way that reflects their comparative credibility?

“It’s ridiculous that Democrats are being asked to accept the integrity of polling averages when a plurality or a majority of the polls are coming from right-aligned organizations,” Rosenberg, the author of the Hopium Chronicles on Substack, tells us. The point, he adds, is to “get the entire mainstream analytical community saying the election is slipping away from Harris.”  

It’s worth understanding why aggregators see value in averaging the polls in the first place. The basic premise behind the idea—and behind including as many polls as possible in those averages—is that the more data one has, the more likely the polling is to offer a reasonably accurate picture of a race. More data means a much larger overall sample, the better to avoid a sampling error; more polls also make it possible to track the trajectory of the race in a granular way from moment to moment.
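The sampling-error argument can be sketched with the standard margin-of-error formula for a proportion. This is a deliberate simplification, with illustrative sample sizes; naive pooling like this ignores design effects, differing methodologies, and the house effects discussed below:

```python
import math

def moe(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error (in percentage points) for a share p
    estimated from a simple random sample of size n."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

# One poll of 800 likely voters vs. ten such polls naively pooled
single = moe(0.50, 800)    # about +/-3.5 points
pooled = moe(0.50, 8000)   # about +/-1.1 points
print(f"single poll: +/-{single:.1f} pts, ten pooled polls: +/-{pooled:.1f} pts")
```

The error shrinks with the square root of the sample size, which is the statistical core of the case for averaging many polls rather than trusting any one of them.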

That all certainly sounds good. But what happens if a substantial bloc of the polling that is added to the averages is gamed or, short of that, is uniformly wrong or biased in one direction?

Theoretically, this should game the averages as well. Something like this happened in 2022: As Nate Cohn wrote for the Times on the eve of that election, the averages were being bombarded by “a wave of polls” from firms that didn’t “adhere to industry standards for transparency or data collection” and which were producing “much more Republican-friendly results.” Democrats ended up defying the results suggested by some of the averages, picking up a Senate seat and holding House losses to a minimum—itself a historically anomalous result for a party holding the White House in a midterm election—even as many predicted a GOP rout.

Some of the pollsters that got those races wrong are the same ones pumping out polls right now on the presidential race. One notable example is Trafalgar, which released polls in 2022 that showed five Republican Senate candidates either ahead or much closer than they ended up finishing. The most notable of these was in Washington state, where a Trafalgar poll in late October showed Democratic incumbent Patty Murray up by just 1.7 points over GOP challenger Tiffany Smiley. That poll generated a raft of “Is Patty Murray in trouble?” stories, the idea being that if even Murray was sinking in very blue Washington, then maybe a huge red wave really was gathering force. (Murray won by 15 points.) 

We’re seeing a similar bombardment in the presidential race this time around. So what is it doing to the averages? 

The keepers of the averages insist that the impact is very minimal. Outfits like FiveThirtyEight; Split Ticket; the Times’ in-house polling tracker; and Nate Silver’s forecast all take methodological steps ostensibly to ensure that “garbage-in” polls don’t lead to “garbage-out” results. These include downgrading the “weight” of polls thought to be systematically biased so they have less influence on the averages than high-quality polls do. (FiveThirtyEight has detailed criteria for determining whether pollsters are high quality, including empirical accuracy and methodological transparency.) Another step is adjusting for a particular pollster’s “house effects” to downplay biases.
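Mechanically, those two adjustments amount to something like the following sketch. All of the margins, weights, and house effects here are made up for illustration; no real aggregator’s numbers or formulas are used:

```python
# Hypothetical polls: (margin, quality_weight, estimated_house_effect).
# Margins are Harris-minus-Trump, in points. A negative house effect means
# the firm historically tilts its numbers toward the Republican.
polls = [
    (+1.5, 1.0,  0.0),   # high-quality nonpartisan poll, full weight
    (-2.0, 0.4, -1.5),   # GOP-aligned firm: downweighted, pro-Trump lean
    (+0.5, 0.8,  0.0),   # mid-tier nonpartisan poll
]

def weighted_average(polls):
    """Subtract each pollster's estimated house effect, then take a
    quality-weighted mean of the adjusted margins."""
    num = sum(w * (m - h) for m, w, h in polls)
    den = sum(w for _, w, _ in polls)
    return num / den

print(f"adjusted average: Harris {weighted_average(polls):+.2f}")
```

In this toy case the GOP-aligned poll still pulls the average toward Trump, but its low weight and house-effect correction blunt most of the pull, which is the aggregators’ claim in miniature.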

Is all this working? The keepers of the averages say yes. G. Elliott Morris, who runs FiveThirtyEight, recently calculated that if the averages include only high-quality polls—and not GOP-aligned ones—in some states the results differ by less than half a point. The Times’ Cohn, who recently acknowledged that we’re seeing a “deluge of polls from Republican-leaning firms” in the averages, ran a similar calculation and found the results moving only imperceptibly.

We see no reason to doubt the accuracy of those calculations. If news consumers are going to trust the curation of high-quality polls that outlets like FiveThirtyEight conduct, then it’s also understandable if they give some weight to these reassurances. And given the larger context here—that is, how inexact a science even high-quality polling tends to be—one can see why aggregators would suggest that tiny shifts in the averages, even ones seeded by GOP polls, don’t warrant too much concern.

But all this raises another question: Why include GOP-leaning polls in the averages in the first place?

Those doing the polling averages do offer a nontrivial argument for including them. It’s that having more polls—particularly now, when polling is very expensive and news outlets are doing it less often—allows one to track the trajectory of the race more closely, says Lakshya Jain, co-founder of Split Ticket. The result, he says, is that properly weighted data from those firms adds more value than it subtracts.

What’s more, casting out pollsters raises other methodological challenges: Where exactly do you draw the line between a GOP-leaning pollster whose data is somewhat biased but still valuable if weighted properly and one who is producing data that’s so beyond the pale that it should be excluded? Jain—who says Split Ticket interviews pollsters about their methodologies to ensure that they are conducting something recognizable as real polling—believes that excluding such data carries its own risk: The exclusions themselves can introduce bias.

“You generally don’t want to throw data out if you can avoid it,” Jain says. “You just have to treat it carefully.” Jain notes that “throwing data out” risks not being “honest” as well, in the sense that this decision could be influenced by a different form of “bias.” 

Still, it’s worth noting that Split Ticket doesn’t include polls from Trafalgar, Rasmussen, or ActiVote in its averages, suggesting that it is possible to draw a line somewhere, which, if crossed, leads to the exclusion of certain pollsters. But it’s not a simple matter to locate where that line should be.

All of which raises another problem—a potentially serious one.

Rosenberg and Bonier, the leading critics of these polling aggregations, are quick to point out that even shifts of a small magnitude produced by GOP polls risk badly misleading people. 

Take Pennsylvania. We examined all the polls from October that FiveThirtyEight includes in its averages. As of October 22, 19 polls had been conducted wholly in the month of October. Eleven are either from right-leaning firms or from firms polling for right-wing news outlets, such as the Daily Mail and The Telegraph. Seven were conducted by nonideological pollsters, sometimes in conjunction with mainstream news outlets (the Times and The Washington Post), sometimes on their own. Trump leads in the right-leaning polls by an average of 1 point; Harris leads in the other polls by an average of about 1.7 points. (The remaining poll was done by Atlas Intel, a Brazilian firm about which little is known and which is polling heavily this cycle.)
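To see how a split like this plays out arithmetically, here is a toy version. The individual margins below are hypothetical, not the actual 19 Pennsylvania polls; they are chosen only so the group averages match the figures above:

```python
# Hypothetical Harris-minus-Trump margins, in points.
right_leaning = [-2.0, -1.0, 0.0, -1.0]   # averages to Trump +1
nonideological = [2.0, 1.0, 2.1]          # averages to Harris +1.7

def mean(xs):
    return sum(xs) / len(xs)

print(f"right-leaning avg:  {mean(right_leaning):+.1f}")
print(f"nonideological avg: {mean(nonideological):+.1f}")
print(f"all combined:       {mean(right_leaning + nonideological):+.1f}")
```

Pooling the two groups drags the nonideological Harris +1.7 down to roughly a tie, which is the basic mechanism by which a bloc of leaning polls can tilt an unweighted average.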

We should emphasize: We don’t know which firms are “correct.” In any case, they’re all within the margin of error. But the pattern here is clear: Many right-leaning pollsters (and their clients) are producing polling that is narrowly more pro-Trump. 

How much does that matter? In the aforementioned calculations run by FiveThirtyEight’s Morris, he posits that the polling averages in Pennsylvania that include the GOP-aligned firms are around 0.8 points more favorable to Trump than the ones that don’t. 

That seems small, but as of October 22, FiveThirtyEight’s overall averages had Trump ahead in the state by 0.3 points. So it’s reasonable to assume that without the roughly 0.8 points that the inclusion of GOP-friendly polls awards to Trump, Harris—and not Trump—might be narrowly ahead. 

In the real world of media spin wars, that sort of difference does matter. In the last week or so, when the averages edged toward Trump, both TV commentators and Twitter accounts cited the tiniest of leads for Trump as evidence that he’s currently winning the state. Even more irresponsibly, some outlets assign candidates electoral votes based on such narrow leads. The GOP polls nudged the averages by less than a point, but they also arguably moved them in a way that prompted people to declare that Trump is now winning—not even just leading, but winning—the election. 

The aggregators argue that these tiny shifts aren’t really a problem. In their view, a lead of 0.4 points isn’t actually a lead: They view it as statistically insignificant, as pretty much the same as a lead of 0.4 points in the other direction. 

But the perception this creates of a shift is indeed a problem. To illustrate the point, Bonier says that he sometimes gets calls from journalists who, prompted by such tiny movements to Trump, are looking to write stories about a momentum shift his way. “I had multiple reporters reaching out to me asking what’s wrong with the Harris campaign when the polling averages moved half a point toward Trump,” Bonier told us.

Rosenberg believes that this small nudging of polls, for the express purpose of shifting the averages just enough to put Trump ahead, is the primary goal of GOP pollsters flooding the averages in the first place. In this understanding, it does not matter if the shift is negligible (as the aggregators claim), as long as it accomplishes the goal of putting Trump narrowly in front and giving spinners grist to proclaim the race is moving his way. “The only reason the Republicans would be doing this,” Rosenberg says, “is if their own internal data was telling them they are not winning the election.”

Jain, of Split Ticket, allows that this can create a perception problem. “A shift from Harris up 0.5 to Trump up 0.5 is a lot less significant than people believe,” Jain says. “Aggregators should be clearer on this point. Republicans since 2016 have been obsessed with projecting strength. I do think that some of the lower-quality Republican-aligned pollsters try to create effects that show Trump surging or leading.”

Jain cites as an example the case that opened this article: the GOP-aligned Quantus Insights openly boasting that one of its polls moved the averages toward Trump. As Jain noted, it should raise concerns for aggregators that “one of the explicit purposes” of some of these polls “is to manufacture a shift in the race.” 

Jain also acknowledges that the argument for removing low-quality GOP pollsters from the averages is reasonable. “I may not agree with it, but I think that’s a totally valid and defensible methodological approach,” he said. Still, Jain thinks including adjusted data is a better choice.

All of this can have serious real-world consequences. As the Times reported after the 2022 elections, red-wave-polling-fueled perceptions of the races ended up producing pessimism even among Democratic operatives, leading to a situation where candidates in potentially winnable races were denied party resources, possibly influencing outcomes.

Yet even now, Democratic pollsters differ on how much red-wave polls matter. Guy Molyneux, the veteran Democratic pollster with Hart Research, agrees that some pollsters are lower quality and “deliberately biased” but thinks Rosenberg overstates their impact. “They should either be ignored or weighted down heavily in poll averages—which they are, at least for the most part,” Molyneux said. “But I’m not sure how much impact they really had in 2022. Campaigns and the parties mainly rely on their own polling, which was largely accurate.” Molyneux added that if Democrats got spooked by red-wave polling in 2022, it was more likely due to past off-year election results and “a general tendency of Dems to be anxious.”

But Cornell Belcher, the Democratic pollster often seen on MSNBC, thinks it’s obvious that the polls are being gamed, and that this matters a lot. “Are you fucking kidding me?” he said. “I said this several years ago. I’m glad it’s catching on.” He sees GOP firms as using polling not simply to deliver accurate assessments of where races stand but “to drive fundraising and outward narratives just as much as internal strategies.”

Michael Steele, the former Republican Party chairman who is strongly anti-Trump but has retained his GOP registration, agrees that some of these polls are gamed. “I know they are,” he said. “I’ve talked to enough people on this side of the street to know they are.” The way it’s done, he says, is that “you find different ways to weight the participants, and that changes the results you’re going to get.” That’s a striking allegation coming from such a longtime senior GOP insider.

Steele also shares some of the same worries as Rosenberg, Bonier, and others—that pro-Trump pollsters are feeding a perception of a Trump lead to provide his allies a way to blame a Trump loss on a rigged election. “They’re gamed on the back end so MAGA can make the claim that the election was stolen,” he said.

Stuart Stevens, the anti-Trump former Republican pollster, agreed. “Their game plan is to make it impossible for states to certify. And these fake polls are a great tool in that, because that’s how you lead people to think the race was stolen,” Stevens said. All manner of postelection mischief-making, and maybe even political violence, would be thus justified.

In an age of cell phones and people ignoring unfamiliar phone numbers, pollsters have a hard job. And those who sift through the hundreds of polls trying to determine each pollster’s methodology and overall legitimacy have a hard job too, especially in a race this close.

But it would be extremely helpful if media figures stopped letting small data shifts set whole media narratives. That goes for partisans as well: When a state poll shifts from +0.3 for Harris to +0.3 for Trump, that’s no reason for Democrats to panic, just as a shift in the other direction is no reason to celebrate. It’s all within the margin of error. The data is simply too imprecise to tell us anything meaningful at this level of granularity. 

It would also help if aggregators would signal more clearly that a lead of 0.3 points in either direction isn’t actually a meaningful development, and it would be useful for pundits to stop flatly declaring on this flimsy basis that one candidate or the other is “winning.” And Democratic operatives really should stop getting so spooked when reporters call to inquire about a small shift in the race that they anonymously agree that they’re losing, when that’s not at all clear based on the actual data.

We now know that the wave of data coming from some right-leaning pollsters is posing serious methodological challenges. We also know that these pollsters are influencing the discourse in a way that risks misleading people. How off the mark all this data really will prove to be remains to be seen. But the question of whether operatives, journalists, and news consumers are going to let that data shake up our perceptions in the present—well, that’s very much within our control. So let’s choose not to.