Opinion

Believe Me, Mark Zuckerberg Isn’t Going to Police Himself

HOT POTATO
Chip Somodevilla/Getty

The Oversight Board process isn’t about Donald Trump’s free speech. It’s about Facebook’s power.

Facebook hired me in 2018 to help clean up their political advertising mess after the Cambridge Analytica scandal exposed them, yet again, to public scrutiny over their negligence in protecting democratic elections around the world. I was asked to build and head a new team responsible for creating and operationalizing policies to ensure that “bad actors” did not misuse Facebook’s advertising to manipulate elections. But it became clear that Facebook leadership wasn’t interested in any of my ideas, including a necessary level of fact-checking for lies about voting, if they threatened their relationships with political parties in power.

At the time, I was bogged down with explanations of why these ideas wouldn’t scale globally, why they would require too many resources, and why they were wrong for the company. Looking back on it now, it is clear that my proposals all rubbed up against a decision that Facebook leadership had already made before hiring me, even if they hadn’t announced it yet—that they would not fact-check the person who had the most power to regulate them: Donald Trump. In America and across the world, preserving their power trumped protecting democracy. My ideas were shut down, and I was cut out of high-level meetings ahead of the 2018 U.S. midterm election. Even when my team built a plan to ensure we would not allow ads that engaged in voter suppression (more details in this article), I was told it was not a priority. Six months after arriving, I left.

Facebook has proven again and again that it cannot be trusted to self-regulate when doing so would either harm its business model or put it in the crossfire of the regime in any given country. The company has never applied its policies evenly, while hiding behind the false notion of valuing free speech above all else. Creating a so-called Oversight Board and handing it the company’s biggest headache—the Trump decision—did nothing to change that.

Yes, I believe the board made the right choice in upholding Facebook’s decision to indefinitely suspend Trump from posting content after Jan. 6. Trump violated Facebook’s policies over and over again, ultimately inciting followers to launch an insurrection and then praising them for it (something which, by the way, I and many others had warned for months could happen). However, the board then punted back to Facebook, insisting on a “proportionate response.” In essence, it gave Facebook the option to reinstate Trump within six months, putting the ball back in Facebook’s court.

In theory, the idea of an oversight board is an interesting experiment. But in practice, it is being used by Facebook to deflect responsibility for the very role it chose for itself: steward of the public conversation.

And by focusing on the Oversight Board’s decision, we are feeding into exactly what Facebook wants; we are diverting our time, attention, and energy away from the important discussion about how to hold the company accountable for how its own tools, design, and business decisions helped spread dangerous conspiracy theories and lies about the election, undermined trust in our democratic institutions, and ultimately contributed to the planning and execution of an insurrection.

The problem, in part, is that Facebook let Trump violate its policies and stoke division and distrust for so long, with barely any consequence, that we are now asking this huge question about whether it was right to de-platform him, rather than asking why it allowed him to break its rules to begin with. Had Facebook held him to the same standard as the rest of us, we might never have made it to this dramatic moment. I have repeatedly argued, in fact, that he should have been held to a higher standard, as he had far greater potential to cause harm than the average Facebook user.

I stand by my belief that Zuckerberg’s no-fact-checking-politicians policy and the so-called “newsworthiness exception” that grew out of it are two of the most dangerous election-related decisions the company has made. The exemptions given to the powerful at the expense of the rest of us, coupled with a business model optimized for frictionless virality, were decisions that harmed democracy.

Facebook leadership has made intentional business decisions, again and again, in the U.S. and around the world, to protect those in power, those with the ability to hurt their business.

By refusing to fact-check politicians while also providing them sophisticated tools to grow their audiences, make their content go viral without any friction, and target vulnerable populations, Facebook tilted the scales, pushing more people to believe conspiracy theories about the election. Trump used the platform to lie about the election and to spread hate and conspiracy theories. We saw the ultimate result of that negligence on Jan. 6.

We need an investigation into every element of the insurrection—including Facebook’s possible role in aiding and abetting the planning and execution of that event—by our democratically elected government or a public body with full access to the necessary data, not by Facebook itself or a Facebook-appointed board.

We must get to the real questions about the systemic issues that make the platform ripe for disinformation, conspiracy theories, radicalization, and hate to begin with. That would require a true audit of whether the company’s business model has contributed to many of these issues—something a number of U.S. senators pushed Facebook VP Monika Bickert to answer in an April 27 hearing, to no avail—and of the company’s recommendation engines, algorithmic amplification of conspiracy theories and lies, and inability (or intentional refusal) to enforce its own rules against dangerous content.

These are all things outside the Oversight Board’s purview, although I do at least applaud its effort to touch on some of these issues in its public statement.

This is why I am concerned that sending the Trump case to the board set a precedent in which the responsibility for how Facebook operates, the policy decisions it makes, the rules it chooses to write, and how it designs its products are all being wrapped up in this content moderation question. Content moderation alone will never fix this. Even today’s decision made that clear—the board asked Facebook direct questions about all of the points above, a number of which Facebook refused to answer.

I absolutely understand the concern that a platform suspending a world leader sets a potentially dangerous precedent for global speech, but I do not take any more comfort in the idea that an unelected group of individuals, chosen by Facebook and in no way accountable to the public, has the final say in something this consequential, no matter how impressive the board members might be.

And therein lies the larger problem: One company intentionally and recklessly scaled to dominate the global public conversation, yet it does not want to bear responsibility as steward of those democratic spaces. It wants to write its own rules and apply them in politically convenient ways. Even the board pointed that out in its decision, writing: “In applying a vague, standardless penalty and then referring this case to the board to resolve, Facebook seeks to avoid its responsibilities.”

I agree with a number of the board’s recommendations, but let’s be clear: they are recommendations many of us have been making for years, starting with the idea that those with the greatest ability to cause harm should face more scrutiny, not less.

There is no easy, quick legislative solution that will create a healthy information ecosystem that helps democracy thrive. This will require looking at the entire ecosystem, including traditional media, cable news, media literacy programs, and civic education. But the status quo cannot be allowed to continue. Facebook cannot be trusted to police itself.

As I’ve written before, and as many have argued before me, our lawmakers need to focus on requiring transparency and regulatory oversight of tools such as recommendation engines, targeting tools, and algorithmic amplification, rather than on the non-starter of regulating actual speech.

Until democratic governments demand actual transparency and oversight of the company’s tools—again, not of the speech itself, but of what the company chooses to do with that speech—we will continue to scream into the wind and lend legitimacy to an unelected, unaccountable so-called “oversight board” that now seems to wield more power over the future of democracy than we, or any of the leaders we voted for, do.
