For all the talk of 2018 being the year of big tech’s reckoning, companies like Facebook, Google, and Amazon have gotten off easy. Yes, formerly sky-high public approval ratings are dipping, and representatives from Silicon Valley’s largest companies—including Mark Zuckerberg—have been publicly scolded in Washington, D.C. But despite growing concerns about tech’s economic and cultural power, rampant privacy violations, and general lack of oversight, the U.S. government has been all bark, no bite.
On Monday, we got a glimpse of what meaningful action could look like: a short paper from Virginia Senator Mark Warner’s office featuring 20 “potential policy proposals” for regulating tech companies, on issues ranging from national security to fair competition to consumer protection. It’s a crucial and belated first step for Democratic lawmakers, who have been long on complaints and short on solutions, but it also shows just how far Congress is from taking legislative action—and how complicated and politically fraught passing any regulation, let alone 20 separate ones, will be.
The policy paper is, in many ways, the inverse of Zuckerberg’s appearance before the Senate Commerce and Judiciary committees in April. That hearing, on “Facebook, Social Media Privacy, and the Use and Abuse of Data,” was something of a charade. When the Facebook founder wasn’t being interrogated about unrelated matters, like his platform’s supposed suppression of conservatives, he came across like a grandson patiently explaining the internet to his confused grandparents. Even those senators who used their time to discuss possible regulations seemed out of their depth.
Warner’s paper, by contrast, is a reminder that Senate staffers often know a lot more than their bosses (though Warner, a former tech executive, is presumably an exception on this issue). The document is careful; it never entertains the possibility of breaking up the big tech companies. The proposed regulations are sensible and, for the most part, not particularly radical. But they would go a long way toward protecting consumers from exploitation and shielding American democracy from foreign interference.
The paper considers how to hold tech companies accountable for policing their platforms and for ensuring that hostile actors are swiftly removed. It proposes that Facebook and others have a “duty to identify” and curtail inauthentic accounts, and that they report regularly to the SEC on what percentage of their accounts are inauthentic. The paper admits that this is a tricky one: weeding out fake accounts may require stricter identity verification requirements, “at the cost of user privacy.” It also notes that a distinction would need to be made between malicious inauthentic accounts and those that are “clearly set up for satire.”
The most intriguing idea, when it comes to enforcement, is the potential for making companies legally liable for “defamation, invasion of privacy, false light, and public disclosure of private facts,” which could lead to millions in judgments against these companies. While a “duty to identify” would likely only act as a nudge—and one that might not be entirely necessary given that Facebook and Twitter are now publicizing the mass deletion of fake accounts and the discovery of political influence campaigns—liability would go much further.
The focus on disinformation, though important, is also backward-looking, dwelling perhaps too heavily on the ways in which platforms were weaponized in 2016. Disinformation campaigns are clearly changing and growing more sophisticated. On Tuesday afternoon, Facebook announced that it had discovered one but was unable to determine its source, which stands in contrast to the Keystone Cops nature of Russia’s 2016 activities, when it paid for advertising campaigns in rubles. That said, it’s not entirely clear what forward-looking regulation on this front would look like, though the 2018 midterm elections will likely give us a glimpse of how influence campaigns have evolved.
Aside from the threat of opening up tech companies to lawsuits, the most sweeping proposals are ones that have been bandied about for some time: the adoption of privacy regulations similar to those that went into effect in the European Union earlier this year, and the passage of the Honest Ads Act, which would regulate online political advertising. Giving consumers control over their data, à la the EU’s recently enacted General Data Protection Regulation, would do much to prevent Cambridge Analytica–style abuses. The Warner proposal would force tech companies to obtain a consumer’s “informed consent” before collecting their data and to notify the public within 72 hours of a breach. The Honest Ads Act, meanwhile, would require that political advertisements on social media be subject to the same disclosure rules as those on television and radio.
The easiest way to achieve many of the proposals in the Warner paper would be to expand the authority of the Federal Trade Commission—which itself is one of the 20 ideas listed. The FTC lacks the “general rulemaking authority” to do very much when it comes to data protection or privacy. But that’s unlikely to change in the near term, the paper notes, as Republicans have blocked efforts to expand the FTC’s power.
That raises the biggest question with these proposals: whether they stand a chance of ever becoming law. An omnibus bill encompassing many different regulations, including those proposed by Warner’s office, would be ideal. In one fell swoop, it would bring the law up to speed with the digital age while implementing a holistic approach to tech regulation. But given the dynamics in Congress today, passing even a single regulation is challenging enough. Large-scale action seems impossibly fraught—too politically risky, and too legislatively complex.
And yet, these hurdles are a major reason why Congress hasn’t done anything concrete about big tech (the companies’ growing lobbying efforts being the other). That’s why Warner’s office put out this policy blueprint. “The hope is that the ideas enclosed here,” the introduction reads, “stir the pot and spark a wider discussion—among policymakers, stakeholders, and civil-society groups—on the appropriate trajectory of technology policy in the coming years.” More likely, the paper will be debated by those who already care about the issue and ignored by most of Capitol Hill. If the Russian influence campaign of 2016 and the Cambridge Analytica scandal led only to a slap on Zuckerberg’s wrist, one wonders what kind of calamity it will take to get Congress to deal with big tech.