
Can Tech Stand Against White Supremacy?

The alt-right relies on social media to spread hate, and Silicon Valley can stop them. Will it?


Long before Charlottesville, white supremacists had found welcome homes on many of the most popular tech platforms. As the New York Times summarized it, that has meant they use “Twitter, Facebook, and YouTube for recruiting and public broadcasting, Reddit and 4Chan for lighthearted memes and trolling,” and other apps for organizing and on-the-ground communication. On the surface, they may not have much in common with the denizens of Silicon Valley, but like the Islamic State and other extremist groups, white supremacists have shown little compunction about using tools created by their putative enemies.

In the wake of last weekend’s protests, during which Heather Heyer was murdered by a white supremacist, this quiet accommodationism is beginning to change. As the Times noted, Discord, a private chat app, has begun kicking white supremacists off its network. The same goes for some domain registrars; the Daily Stormer, the deeply racist epicenter of online white supremacist discourse, was booted off of GoDaddy, then Google, eventually finding a home on a Russian server. Some payment systems, including a popular Bitcoin bank, have cut off service to the website. Airbnb has also attempted to prevent white supremacists from using its platform for bookings. Meanwhile, the firing of Google engineer James Damore, who authored and distributed a sexist screed about his colleagues, has made him a darling of the alt-right and spurred right-wing protests outside some Google offices.

This is potentially the moment for tech companies to brandish some moral authority, but it’s not clear if they’re capable of asserting it in anything but a haphazard way, with an eye more toward pleasing optics-minded investors than a public tired of seeing racist extremists in their news feeds. How else to explain the agnostic stance of companies like Twitter? The flailing micro-blogging company not only allows leading white supremacists like David Duke and Richard Spencer to use its platform but has also, inexplicably, verified Spencer, thereby elevating his status. (Among other hate-mongers granted the privileged blue checkmark is Jared Taylor, the editor of American Renaissance.) Their ilk has also taken to Periscope, the Twitter-owned live-streaming app; Duke’s latest Periscope broadcast, in which he praises President Trump and inveighs against Jews and other groups, has more than 200,000 views.

For years, tech companies largely ignored the problem of extremists, instead hiding behind the astringent language of terms of service agreements, which often obligate platform owners to do little more than provide a promised service. They also had the protective mantle of the Communications Decency Act, which essentially shields internet companies from legal liability for content that appears on their networks. The law has been a boon to tech companies—the EFF calls it “the most important law protecting internet speech”—but it has also given them little incentive to police hate speech. The ever-spreading war on terror upset this fragile dynamic, putting the tech industry in a tough position. The rise of ISIS, with its facility at getting its message out in graphics-rich packages distributed over Twitter, YouTube, Telegram, and other popular platforms, pushed tech companies, including the usually reticent Twitter, to become more proactive in banning extremist users and to yield to an FBI that frequently asks them to turn over user data.

Now the problem has come home, with domestic white supremacist extremism representing a greater threat to the public than foreign jihadism. And these extremists are using the same technologies to organize themselves. Terms of service agreements often leave plenty of wiggle room for companies to remove content or revoke access if they come across something they don’t like, but it’s a role they’re reluctant to play. Extremist content “only becomes an issue if reported, or they get some type of negative coverage, or have a particular interest to curate/censor the content,” says Ajay Sharma, a lawyer who specializes in writing ToS agreements. Some companies, like Reddit, have been content to let their platforms be essentially free-fire zones where almost all political beliefs, no matter how racist or misogynist, are welcome.

At the same time, Sharma points to stricter standards for monitoring hate speech currently being developed in the European Union. “Companies with a lot of influence and exposure (like Facebook, YouTube, etc.) are prime targets for the EU, so they’ve made efforts to get ahead of EU regulations by proactively adopting policies where they prohibit objectionable content.” The result is that for now, it’s “up to the provider [to determine] what qualifies as objectionable.” That can lead to instances of odd or overzealous enforcement—a video of Allied soldiers blowing up a swastika in Nuremberg was recently taken off YouTube for violating hate speech guidelines.

As tech companies move to a more interventionist posture, it’s worth asking once again whether this dawning awareness will force Twitter to do something about its most infamous user. Donald Trump regularly uses his feed to promote lies, media conspiracies, and talking points that, if not directly out of the white supremacist playbook, certainly would fit comfortably within it. His tweets provide hope and succor to extremists like David Duke while fanning hatred against immigrants, journalists, Muslims, and other perceived enemies. Twitter may not be legally liable for what Trump tweets (for which we can thank Section 230 of the CDA), but the company does enable him to spread misinformation to millions. Twitter is not the government; it has no obligation to guarantee speech rights to Donald Trump, Richard Spencer, or anyone else. It could easily do more to prevent these people from using its service—provided that the beleaguered social network doesn’t mind banning some of its most engaged users.

That points to a larger problem with the political role of tech companies as custodians of our data state. Our digital public sphere depends on the largesse, cooperation, and occasional altruism of private corporations that have unending appetites for personal information. Pitting these profit-driven, data-hungry entities against a racist regime and its extremist supporters seems unlikely to lead to a victory for the public good. Recent events have proven the utility of these platforms for organizing and disseminating speech of all types. That leaves a nagging question that all tech giants should be forced to answer: Whose side are they on?