
Imagining the Post-Trump Internet

The idea that social media platforms need to be made civilly or even criminally liable for harmful content appears to be gaining ground in Congress.


On Monday, Trump declared victory in his battle with the coronavirus. By Tuesday, he had moved on, fighting the invisible enemy he apparently blames for keeping his (false) Covid-19 tweets from the public. “REPEAL SECTION 230!!!” he wrote, meaning Section 230 of the Communications Decency Act, which, in part, defines what kinds of user-generated content websites can host without facing legal liability. In Trump’s mind, Section 230 enables platforms to remove his content or any material they find politically objectionable. In fact, the law is what makes platforms like Twitter possible, shielding them from liability for everything posted by every user. Without 230, platforms would have to act even more like censors, like police.

But focusing narrowly on 230 misses a significant point: While Trump may have been angered by his misinformation tweet being labeled as such, in truth, when a service like Twitter decides to crack down on content, it goes after people who are far less powerful. While Trump was hospitalized, Twitter announced that “[c]ontent that wishes, hopes or expresses a desire for death, serious bodily harm or fatal disease against an individual is against our rules,” and that it would remove tweets or put offending accounts into read-only mode. The countless users who had experienced death threats on the platform quickly spotted the power play: When Black, trans, or Muslim people have been threatened with violence, Twitter has largely looked the other way.

Still, Trump is not alone in targeting this law, in what has become a messy fight over big tech, content moderation, and the unaccountable power of platforms like Twitter, Facebook, and YouTube. The debate over tech platforms’ power has drawn in civil rights activists who typically don’t touch internet policy. It has put Facebook and the power held by CEO Mark Zuckerberg and COO Sheryl Sandberg under intense public scrutiny, though little has changed. The fight has transcended party lines: Republicans and Democrats both compete, in congressional hearings, to rake tech executives over the viral internet coals. And the idea that social media platforms need to be made civilly or even criminally liable for harmful content appears to be gaining ground in Congress.

A new bill introduced last week by Senators Joe Manchin and John Cornyn, the See Something, Say Something Online Act of 2020, joins a host of others introduced in the past year—bills with equally unwieldy titles like the PACT Act, EARN IT, and BAD ADS—all concerned with how these platforms are both empowered and protected by Section 230 of the CDA, which has been described both as the law that made the internet as we know it possible and as the law that made the internet a “free for all.”

Senator Josh Hawley, a Republican from Missouri, has waged his own personal war on 230 based on the false premise that social media platforms have an anti-conservative bias, which has now become a rallying cry in venues from the Conservative Political Action Conference to Twitter itself. In May, Trump posted a tweet that just said “REVOKE 230!”—like Tuesday’s tweet, no further explanation needed for his audience. That came just as the national uprising in the wake of the Minneapolis police killing of George Floyd was beginning, when social media again made it possible to share evidence of police violence and calls to action quickly and widely. Attorney General Bill Barr issued guidance that people he deemed to be “antifa” should be regarded as domestic terrorists; protest photos became bread crumbs for police looking to charge activists with rioting and other crimes. Then came Barr’s recommendations on revoking 230 protections from “bad actors who purposefully facilitate or solicit content that violates federal criminal law or are willfully blind to criminal content on their own services.”

It’s difficult to predict how any of these rollbacks to 230 might ever be enforced. What we do know is that the users currently experiencing the most harm in connection with social media platforms’ amplification of racism, nativism, and calls for violence are themselves high on the list of those with the most to lose as the rhetoric against online platforms escalates. As Evan Greer, deputy director of Fight for the Future, told me after the introduction of the See Something, Say Something Online Act, “In a world where the current sitting administration calls Black Lives Matter activists ‘terrorists,’ it’s not hard to imagine how that will play out in practice.”

Overly broad bills—some with bipartisan support—end up pushing platforms to act more like police than community managers. Social media company monopolies, surveillance-as-business-model, algorithmic discrimination: The problem with the platforms is deeper than the content they refuse to take down. The whole debate over Section 230 and platform regulation, as it stands now, serves to entrench the power of the platforms.

The bill introduced by Manchin and Cornyn last week is part of a now-familiar pattern of anti-230 bills bringing together Republicans who want to go after platforms over bias against them (bias that doesn’t exist) and Democrats who believe (often correctly) that platforms aren’t doing enough to protect users. “Proposing a bill amending 230 has become the congressional equivalent of mayors painting Black Lives Matter on a street,” said Kendra Albert, clinical instructor at the Cyberlaw Clinic at Harvard Law School. Sometimes, the bills are just an “artificial, cynical” gesture, they told me. But these proposals can also do real damage: by forcing the removal of content that has nothing to do with what was deemed harmful, or by making it harder to find the people doing harm once they are pushed off those platforms. “Passing one of these bills would be worse,” Albert said. “It’s painting Black Lives Matter on the street and then increasing the police budget to hire officers to guard it.”

Oddly, or appropriately enough, it was a mid-1990s panic about online porn that prompted the CDA, a set of amendments to the more sweeping Telecommunications Act. Senator Ron Wyden, then a House member, and then-Representative Chris Cox were concerned that efforts to regulate platforms would backfire, resulting either in platforms broadly suppressing content posted by users (so they could steer clear of any ambiguous cases) or in a complete hands-off approach (so they could claim they had no control at all over what their users posted and thus were not liable as publishers). In response, Wyden and Cox proposed what became Section 230 of the CDA, which states that an “interactive computer service” cannot be held liable for content posted by its users, whom the law considers responsible for their own content. Section 230 also protects platforms from being treated as “publishers” should they moderate or remove content posted by users. Since its passage in 1996, the CDA’s “decency” provisions have been ruled unconstitutional by the Supreme Court, but Section 230 remains.

Here is where Section 230 collides with the current conflict over harmful content: The law is commonly cited as the reason Facebook and other platforms are under no obligation to remove such content. That speaks to one element of 230—platforms are not legally liable for content posted by their users—but it overlooks what’s critical about the other part, which was meant to keep platforms from throwing up their hands and doing nothing lest they lose legal protections themselves. Some critiques of 230 today say the provision is simply too outdated to apply to our current experience of the internet, where social media platforms reign supreme and where both users and profits have soared. But understanding that concentration of power is key to understanding what 230 is and does.

Social media companies have used that power without any significant accountability to their users. Twitter enabled the coordinated harassment campaign known as GamerGate, which targeted queer women and women of color, by letting users swarm certain hashtags and bombard the people who used them with abuse. Facebook’s infamous “real names” policy locked drag performers and Indigenous people, among others, out of their accounts, because their names were deemed illegitimate. Sex workers have had their content removed or suppressed on just about every social media platform, according to a new report from the collective Hacking//Hustling. (In 2018, I helped the group organize a conference for sex workers, activists, and journalists to address these issues.) When social media platforms targeted comparatively powerless and marginalized people, they mostly got away with it, and in some cases still do. Their content bans and account deletions were not considered politically significant. Neither was the evidence Black feminists began collecting in 2014, after being targeted by mass impersonation and trolling campaigns, exposing nascent networks of men’s rights activists and white supremacists engaged in coordinated harassment.

But 2016 changed that: With Trump in the White House, far-right online activity became mainstream political news. More people began to question the power of social media platforms, which had lent such networks the tools to organize and mobilize. There was more, unraveling in leaks and investigations: targeted Facebook ads placed by the Trump campaign meant to deter Black people from voting, and fake Twitter accounts created to amplify disinformation useful to the campaign. When some went looking for answers as to how that could happen, 230 became one of their targets, if not a scapegoat.

Now, given the stakes of the 2020 election, holding platforms accountable for their role has become synonymous with preserving a free election, or even a free country. “Facebook, America and the world look to you now for decisive action to protect democracy,” said The Age of Surveillance Capitalism author Shoshana Zuboff, leading a press conference last Wednesday held by a group calling itself the Real Facebook Oversight Board, a direct challenge to Facebook’s own opaque efforts. If Facebook doesn’t stop its targeted spread of Trump’s misinformation to an audience of millions, said fellow board member and Harvard Law School professor Laurence Tribe, “We could be witnessing the end of the great American experiment.”

With those kinds of risks before us, it is all too possible that, in exasperation and in fear, rolling back or repealing Section 230 will seem like the answer to unchecked corporate and state power. What makes this difficult to wade through is the mix of concerns: some real (white supremacists), some manufactured (alleged viewpoint bias against conservatives), and some genuinely complicated (harassment of all kinds). There’s a kind of moral panic about Section 230, on the one hand, and, on the other, “very real concerns about the ways that surveillance capitalism business models are harmful and are fundamentally incompatible with basic human rights and democracy,” said Greer, whose group, Fight for the Future, is currently running a campaign to protect 230 and is having to navigate all of this.

Greer is very familiar with how political activism can be swept up in platform moderation efforts. In May, trying to draw attention to an upcoming congressional reauthorization of the Patriot Act, Greer posted to Facebook a Vice News story about it, one in which she was quoted. “When I logged on last night I saw something I had never seen before,” she wrote later, “a notification that read ‘Partly false information found in your post by independent fact checkers.’” At first, she thought it was a glitch. It turned out that Greer’s post was flagged because of a USA Today fact-check. The news outlet was a member of Facebook’s third-party fact-checking program, a response to demands that Facebook deal with rampant misinformation on the platform. The “partly false” tag resulted from a fact-check of an entirely separate post, from a Libertarian Facebook group, on the same vote; that USA Today fact-check (“Claim: 37 senators ‘voted for federal agencies to have access to your internet history without obtaining a warrant’ … Our ruling: Partly false”) was then appended to Greer’s post, implying that the Vice News story was partly false, too. (The issue at stake appeared pretty minor: The Senate had voted not to vote on an amendment, rather than voting “for” something outright.)

After Facebook flagged Greer’s post, anyone who had shared it would see the same notice overlaying the preview of the article. The fact-checker had overstepped, she wrote, but there was no official process for appeal. There is, however, an unofficial one: When the same thing happened to anti-choice activist Lila Rose, after she posted a video falsely claiming that abortion is never medically necessary, she complained on Twitter, and Senator Hawley wrote a letter of complaint to Facebook. Facebook removed the notification, far from the only time it has caved to right-wing pressure.

“Many of the people that are most harmed by the collateral damage of content moderation are people without a lot of power,” Greer told me, like Muslims whose content gets caught up in filtering tools allegedly targeting “terrorism” content, as well as LGBTQ people and sex workers whose posts get flagged as “adult” content. Liberals and people on the left need to exercise caution when making demands about content moderation, said Greer, given who has already been harmed by such calls. “Those people’s speech doesn’t seem to be very important to nonprofits in D.C. who just want to score points against Trump, and that really worries me.… They could end up paving the way for censorship of those marginalized voices if they aren’t taking those concerns seriously.”

Greer does not underestimate the harm done by social media companies when they amplify “the worst things on the internet.” They should be held accountable for that amplification, but that’s not enough. Working toward a better internet is not the same as working toward a better world. “It takes long-term movement-building to root out those harmful ideologies from our movement, from our communities, from our societies. There isn’t some silver bullet switch you can turn on or off.”

That kind of magical thinking, along with the “painting Black Lives Matter on the street”–type gestures that Kendra Albert flagged, ends up written into the various bills to roll back Section 230. The new bill from Senators Manchin and Cornyn is just the most recent example: It’s meant “[t]o require reporting of suspicious transmissions in order to assist in criminal investigations and counterintelligence activities relating to international terrorism, and for other purposes.” Aaron Mackey, staff attorney at the Electronic Frontier Foundation, said there would be no way to enforce the bill, if passed as written, without companies prescreening all content posted online. It’s not just social media companies that would have to comply, he told me. “Any service that we use that has third-party content”—Slack, or Substack, or Patreon, for example—this bill would make them “agents for the government, to basically snoop on their users and treat them as suspects, as potential individuals who will use their services in ways that are illegal.” The bill would potentially empower these companies to further collect and trade our content and data—not with ad companies and political campaigns but with the federal government, in exchange for companies maintaining their 230 protections.

The conduct the bill targets is also dangerously broad—“any public or private post, message, comment, tag, transaction, or any other user-generated content or transmission that commits, facilitates, incites, promotes, or otherwise assists the commission of a major crime.” Say you organize a protest or action using social media, said Mackey. “You used a digital service to communicate and to encourage people to attend.” If something happens there deemed a “major crime,” then, “are you now held liable, or is the service held liable for some general statement like, ‘Let’s organize, let’s march in the streets’?”

One precedent suggests the potential impact. In 2018, Congress amended 230 with the passage of SESTA-FOSTA (the Stop Enabling Sex Traffickers Act and the Allow States and Victims to Fight Online Sex Trafficking Act), which codified an exception to Section 230 for prostitution and sex trafficking. As a result, websites that sex workers used to advertise, along with forums where they shared concerns about dangerous clients and workplace safety, began shutting down. Sex workers also reported to Hacking//Hustling that it’s not just their work-related online lives that have been targeted but their activism against laws like SESTA-FOSTA, with activist hashtags apparently hidden from search results and, in some cases, accounts shut down.

Some of the same legislators behind SESTA-FOSTA, like Senator Richard Blumenthal, have now proposed another 230-targeting bill, the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act of 2020, or EARN IT, which is meant to hold platforms accountable for hosting child sexual abuse material. SESTA-FOSTA was supposed to do the same for sex trafficking. Before it passed, some anti-trafficking groups came out against it, saying it would make it more difficult for them to bring prosecutions and would push sex trafficking to harder-to-track parts of the internet. A bill to study the impact of SESTA-FOSTA was introduced in December but is stalled for the moment. It’s possible EARN IT will be rushed through before anyone understands what SESTA-FOSTA achieved. “As lawmakers, we are responsible for examining unintended consequences of all legislation, and that includes any impact SESTA-FOSTA may have had on the ability of sex workers to protect themselves from physical or financial abuse,” said Senator Elizabeth Warren, a co-sponsor of the study bill. The study would also provide a clear picture of how rollbacks to Section 230 don’t just impact the internet but have consequences for human rights. EARN IT appears to be moving ahead regardless: A companion House bill was introduced Thursday.

“I would hope that we could look forward to a time when we can debate the merits of 230 on its terms but also in the broader context of dealing with these larger, intractable problems,” Aaron Mackey at EFF told me. One such approach, something else Warren supports, is breaking up the monopolies these companies hold and the power those monopolies grant them. On Tuesday, the House released a report on large tech companies, including Facebook, describing these onetime start-ups as “the kinds of monopolies we last saw in the era of oil barons and railroad tycoons.” Democrats recommended legal reforms that could potentially lead to breaking up these companies—Apple, Amazon, Google, and Facebook—while Republicans stopped short of that, reportedly claiming this would harm economic growth.

Another approach is to be clearer about what role internet policy does and does not have in challenging far-right extremists, or in addressing child sexual abuse, or in ending the opioid overdose epidemic—which Manchin said is part of what drove his 230 bill. “We live so much of our lives online,” Mackey added, and that’s why people want to fight these issues online. “It seems like we’re attacking 230 because bad things happen online—and they do. But they don’t meaningfully address the underlying social problems.”

We can also do more to track the collateral consequences of demands already being made of online platforms. “For me, it’s not a theoretical,” Greer told me. When her calls to action were flagged by association as misinformation, “this actually did harm to our movement and our ability to make change.”

Demands that platforms regulate speech tend to revolve around speech that could do harm, but those advancing them—including the president—don’t often consider how the policy changes that result from those demands will be turned against them. “We can’t separate the reality of how these companies act,” said Greer, “from what we are calling for.”