Legal Gibberish

A Judge Says Biden Can’t Scold Social Media Firms. That Makes Zero Sense.

His ruling seems more interested in ideological grievance than the First Amendment.


On the Fourth of July, a federal judge issued a ruling in a free-speech case that he said upheld Americans’ “right to engage in free debate about the significant issues affecting the country”—by limiting how certain Americans can debate significant issues affecting the country.

If that sentence looks off to you, you have company among First Amendment scholars.

The seven-page injunction that Judge Terry A. Doughty issued in State of Missouri, et al. v. Joseph R. Biden Jr., et al. bans the leadership of four cabinet departments, along with dozens of named officials, from “urging, encouraging, pressuring, or inducing in any manner the removal, deletion, suppression, or reduction of content containing protected free speech posted on social-media platforms.”

In fewer words: “Feds, shut up.”

Doughty unpacks that injunction in the U.S. District Court for the Western District of Louisiana with a 155-page opinion that leads off with a sentence inviting speculation about whether the judge’s law-school curriculum skipped the Sedition Act: “If the allegations made by Plaintiffs are true, the present case arguably involves the most massive attack against free speech in United States’ history.”

The alleged offense here is persistent, often insistent, efforts by officials in the White House, the Department of Health and Human Services, the Centers for Disease Control and Prevention, the Department of Homeland Security, the Cybersecurity and Infrastructure Security Agency, and other federal entities to induce Facebook, Twitter, and other platforms to stop spreading misinformation about Covid-19 and election security, among other trending topics.

The opinion documents regular do-more hectoring by Biden administration officials of trust-and-safety staff at those social networks—plus such statements by President Biden as his July 2021 comment that letting anti-vaccine lies run rampant was “killing people.”

But does scolding equal compulsion?

“There is a First Amendment line that the government could in theory cross if it coerced a social media company,” said Samir Jain, vice president of policy at the Washington-based Center for Democracy & Technology. But, he added, the ruling doesn’t show that line crossed.

Saying “I am no fan of the government butting into content moderation decisions,” Ari Cohn, free speech counsel at TechFreedom, a Washington think tank, added, “in this case, I don’t really see that level of coercion.”

His take: “I see perhaps inappropriate meddling by the government, but nothing that really shows that platforms were worried that if they didn’t do what the government was asking, there would be consequences.”

David Greene, civil liberties director and senior staff attorney at the San Francisco–based Electronic Frontier Foundation, concurred.

“This is a serious issue deserving of serious attention and the court does not parse acceptable from unacceptable communications,” he wrote in an email. “It seems to find constitutional coercion in every interaction even when the record seems pretty thin.”

Cathy Gellis, a Bay Area lawyer who specializes in digital rights cases, said this ruling’s recounting of the administration telling platforms “do this or else” lacks a necessary component—“what it doesn’t substantiate is that there was an ‘or else’.”

In fact, Gellis pointed out, the most direct threat—reform of Section 230 of the Communications Decency Act, the 1996 law that generally shields online forums from liability for what their users post and places that liability on the users themselves—is one the executive branch can’t carry out on its own, because rewriting a statute is the legislative branch’s job.

“I think some of the behavior of the administration was a little untoward,” she said. “But there was no cudgel to use.”

And as The Washington Post’s Philip Bump observed, the opinion’s assertion that then–White House communications director Kate Bedingfield said in a July 20, 2021, White House press conference that social platforms would face a presidential determination of their CDA 230 liability for misinformation itself falls into “[Citation needed]” territory.

Bedingfield did appear on MSNBC that day and spoke less specifically about holding social platforms accountable, but the White House schedule for that day shows no such presser by her.

The Trump administration had been much more vociferous in denouncing content-moderation calls by social platforms, including a bizarre memo directing the Federal Communications Commission to rewrite CDA 230 through regulations. But that episode gets no mention by Doughty—who was nominated by President Trump in 2017 and confirmed by the Senate in 2018 in a 98–0 vote.

The actual incidents that Doughty cites make the ruling even more problematic. Lead plaintiffs Louisiana and Missouri contest the application of content moderation not just to people questioning the origins of the pandemic or Twitter’s brief and quickly regretted ban on sharing the New York Post’s story about Hunter Biden’s laptop, but also to a Star Wars cantina’s worth of cranks, conspiracy theorists, and other occupants of the Fox News Extended Universe.

“What is really telling is that virtually all of the free speech suppressed was ‘conservative’ free speech,” Doughty declares.

The more obvious pattern uniting the likes of Covid denialist Alex Berenson, anti-vaccine advocate and Democratic presidential candidate Robert F. Kennedy Jr., and election-denying Gateway Pundit publisher Jim Hoft is their epistemic closure.

“It’s very much ideologically aimed toward a particular set of grievances,” said Cohn of the judge’s recap.

As Twitter has learned since Elon Musk’s chaotic reign brought neo-Nazis back onto the platform, advertisers often don’t want to see their messages appear next to Infowars guests.

That can happen without government intervention: The opinion cites examples of LinkedIn booting Covid skeptics but does not mention any instances of the feds berating that Microsoft platform over Covid misinformation beforehand.

Meanwhile, the Covid prescription offered by Berenson, Kennedy, and their ilk—take your chances with the virus, not the vaccines—can leave audiences not just misinformed but dead.

“It’s a bad ruling, in the sense that it will make it that much harder to combat misinformation or disinformation in the case of anti-vaccine content,” emailed Angela Rasmussen, a virologist at the Vaccine and Infectious Disease Organization at the University of Saskatchewan. “That’s very bad news for using these platforms to spread reliable information.”

(I should note here that as a vaccination-clinic volunteer and poll worker, I probably qualify for inclusion on these people’s Enemies Lists.)

But in Doughty’s world, if the government strenuously objects to a social platform’s conduct, the platform’s agency evaporates: “Therefore, the question is not what decision the social-media company would have made, but whether the Government ‘so involved itself in the private party’s conduct’ that the decision is essentially that of the Government.”

Having thus concluded that during the pandemic, the government “assumed a role similar to an Orwellian ‘Ministry of Truth,’” Doughty imposed the gag order on so much of the executive branch—which TechFreedom’s Cohn and CDT’s Jain separately called a clear case of prior restraint on free speech that the Supreme Court has held requires the strictest scrutiny.

(The Biden administration appealed the injunction Wednesday, hours after the State Department canceled a scheduled meeting with Facebook over the security of the 2024 election.)

Except, that is, for eight exceptions in which the feds may jawbone away. Those include criminal activity, national security, “foreign attempts to influence elections,” attempts to “mislead voters about voting requirements and procedures,” preventing “malicious cyber activity,” and “permissible public government speech promoting government policies or views on matters of public concern.”

(The issue of whether a pandemic qualifies as a matter of public concern seems to have been left as an exercise for the reader.)

The absence of medicine from that list remains striking, Greene noted: “The subjects the court lists reveals a lot about the court’s own thinking about subjects the government has a legitimate concern in advancing.”

Or as Jain put it: “The court ends up making the very kind of content-based decisions that it accuses the executive branch of doing.”