On Wednesday afternoon Facebook founder and CEO Mark Zuckerberg published an open letter about privacy, his first public statement since it was reported that Cambridge Analytica had improperly used data from tens of millions of Facebook users during the 2016 election. In that letter—and in a series of interviews that went live that evening—Zuckerberg was in full-on damage control mode, both deflecting blame and reassuring users that their personal information is safe. But more than anything, he underscored the inherent contradiction between Facebook’s two core functions: its social network and its advertising business.
Zuckerberg’s official statement was a mix of arrogance (“The good news is that the most important actions to prevent this from happening again today we have already taken years ago”) and defensiveness (“I’m serious about doing what it takes to protect our community”); on CNN, in contrast, he was clearly rattled, stumbling and sweating through straightforward, predictable questions. In both instances, Zuckerberg did the bare minimum, shying away from genuine responsibility while heaping most of the blame on Cambridge Analytica.
Yes, Zuckerberg said “sorry” a few times, but much of his language was passive: “This was a major breach of trust,” he told CNN. He admitted that he “let the community down” by not properly vetting Cambridge Analytica, even though Facebook had previously all but admitted it had done astonishingly little to vet app developers with access to millions of users’ personal data. And at no point did Zuckerberg reckon with the real issue: that Facebook’s entire business model is predicated on harvesting user data and selling access to it to advertisers and companies like Cambridge Analytica. The result was a series of superficial measures aimed at smothering the scandal and winning back skeptical users at minimal cost.
Going forward, Facebook will conduct an audit of applications that had access to a significant amount of data before the company implemented stricter privacy policies in 2014, and will ban any company that continued using that data. It will inform people whose data was misused by a company like Cambridge Analytica. It will cut off developers’ access to a user’s data if that user hasn’t opened the app in three months, and will restrict what an app can access without a user’s consent to the bare essentials: the user’s name, photo, and email address. It will encourage users to be more conscientious about how they share their information on the site. And it will start a bounty program to reward users who spot vulnerabilities.
But these steps fall short of serious accountability, such as an independent audit of the social network’s privacy protections. They also narrow the scope of Facebook’s responsibility by shifting oversight to users themselves. Yes, further limits will be placed on apps that use Facebook to collect data, but Facebook is also signaling that it wants users to self-regulate. This is a safe and cynical bet from the social network, which can then shield itself from future Cambridge Analytica-like scandals by claiming it gave users the tools to become more literate about their data-sharing. Again, Facebook is skirting a core problem: much of the public’s confusion about privacy stems from Facebook itself, which presents itself as a benign baby-photo website when it is really a massive purveyor of personal data.
And while these moves amount to greater transparency—a word that has rarely applied to Facebook in its 14 years of existence—that ultimately doesn’t mean very much. That’s because Facebook can only do so much to rein in the third-party activity that constitutes its primary source of revenue. Zuckerberg can put guardrails around that central function, but he knows as well as anyone that he can’t do much to change it. Facebook is a publicly traded company, and its investors would be furious if it altered its business model. Zuckerberg holds a majority of the company’s voting rights—a position that would allow him to dramatically remake the company if he so chose. But as CEO, Zuckerberg has shown little of that kind of daring.
Facebook didn’t become one of the most powerful companies in the world by owning up to mistakes or opening itself up to outside scrutiny. That’s now happening anyway: Zuckerberg will likely be called before Congress, and some form of regulation may follow. (In interviews, Zuckerberg subtly dodged the question when asked if he would testify, saying that Facebook would send the person with “the most knowledge about what Congress is trying to learn.” He played similar games when discussing regulation, saying he was open to it in theory.) Those who are alarmed about Cambridge Analytica for political reasons may be won over by Zuckerberg’s paltry measures, but there’s not much here for those who are concerned about the company’s control over a vast trove of personal data.