
Here’s How to Fix Facebook’s Fake News

Truth Shredding

Transparent algorithms and unfiltered newsfeeds may not be enough to fix the digital delivery systems that have made fake news a fixture. What’s needed is a software overhaul.

Photo Illustration by Sarah Rogers/The Daily Beast

For those of us who (despite a big blow to our confidence) still believe in the power of ideas to generate good, Facebook’s fake news problem should be a lightning rod.

Nobody wins in a system that rewards lying. If stories stating that Pope Francis is endorsing this or that candidate get more readership than legitimate news while countless other bogus articles achieve trending status, we have a real problem. We need a level playing field for free and open debate in which a general adherence to truthfulness can safely be assumed. Disinformation is a scourge on our democracy.

While we can’t save people from their own stupidity, we shouldn’t tolerate a system that gives an incentive to Macedonian teenagers to make up BS stories that entice clicks from gullible Americans. So, we must redesign the underlying architecture of social media. And for that, we need forward-thinking software engineers and socially responsible entrepreneurs, people willing and able to disrupt the insidiously destructive, self-serving profit models of social media companies like Facebook.


First, let’s talk about what not to do. Fixing the fake news problem—as well as the hate speech problem and the echo chamber problem—cannot be achieved with centralized censorship, however appealing it might seem to anyone who’s been abused by racist trolls. No one person in history has accumulated more undetectable control over our communication fabric than Facebook CEO Mark Zuckerberg. So when he vows to weed out fake news via a non-public methodology, we should worry deeply about a single institution holding such power to unilaterally decide what constitutes the truth. Nor should we impose draconian rules requiring potentially vulnerable people to identify themselves online.

Freedom of expression and privacy are not only human rights; they are vital for the health of what we call the “social organism.” The culture within this new, leaderless, digitally interconnected system for sharing information must be allowed to evolve, naturally, on its own terms. And, just like the human immune system, that evolving online culture needs exposure to all ideas, both good and bad, if it is to develop well-entrenched norms of tolerance and civility and adopt the right defenses when those norms are crossed. Until it does, fake news will continue to enter our system like a cancer that tricks our cells into believing it is harmless, feeding on the very nutrients and metabolic processes that are supposed to keep the organism healthy.

So, if we can’t censor and can’t force people to come out from behind their pseudonyms, how can we build a model that approaches the trifecta of free speech, privacy, and honesty? Dedicated fact-checking services such as Snopes and PolitiFact can help people discern truth from lies, but for true reform we must attack the incentive structures and economic models that feed these problems. We need to bring transparency and accountability right into the core of the system, into the software programs that govern how information is shared over these networks. That way we can conceive of a social media universe where people are free to lie, if they so wish, but required to pay a price for systematic lying.

First, we must demand transparency in the algorithms with which Facebook and other companies curate our newsfeeds. The software should be open to scrutiny and stripped of the secretive distortions with which it creates captive pools of like-minded users. As Facebook’s software has evolved, it has come to relentlessly, iteratively steer human beings into ideological camps, reinforcing groupthink and entrenching an uncompromising liberal-versus-conservative divide.

Let’s be clear: This is no accident; this is a business model. Algorithmic curation allows social media platforms to deliver clearly defined niche markets—Facebook’s ad marketers call them “look-alike audiences”—to advertisers who pay a “boost” fee for prominent placement in those targeted feeds. This is especially appealing to political propagandists, that special breed of “advertiser.” They can use this system to tap bands of enthusiastic supporters who will, at no charge, dutifully “like” and “share” posts that appeal to their worldviews, regardless of how truthful they are. Facebook’s echo chambers are an incredibly cost-effective means of propagating disinformation.

By contrast, Facebook’s secretive algorithm has pulled a bait and switch on the trusted media brands it calls “partners.” With no control over how the algorithm functions, they are forced to keep paying boost fees to gain the audience reach they need. Anyone who has managed a Facebook page knows the cat-and-mouse game played against its obscure algorithm. It is not an organic system. The upshot is that the public trust these outlets have built through decades of investment in serious journalism, the very essence of their brand, is being systematically devalued in the interests of Facebook’s business model. In effect, it brings The New York Times down to the same level as those kids in Macedonia. This, people, is a serious societal problem.

Here’s the thing, though. Facebook’s core business of creating and delivering look-alike audiences would become much harder to carry out in this manipulative way if its algorithm were open for all to see. Of course, exposing Facebook’s “magic sauce” won’t sit well with the company’s shareholders. But transparency as a principle is well appreciated among the broader coding community of Silicon Valley, where the open-source software movement attracts a quasi-religious commitment from many developers. It’s time we demanded it.

Ideally, all social media would be delivered over fully open, non-curated feeds where users, not the platform owners, choose who to follow and what to read. Twitter approximates that fire-hose approach, which may be why it has been somewhat spared from the fake news controversy. But Twitter is not immune to disinformation, hate, and tribal biases either, not to mention a prevalence of fake accounts, fake follower lists, and fake re-tweeting “bots.” This tells us that transparent algorithms and unfiltered newsfeeds might not be enough. To fully tackle these problems, social media must be subjected to a major software re-design.

We must aim to align the incentives that drive content-providers’ publication strategies with the public’s interest in free and open debate. Fake news and fake networks go against that interest. So how might we dismantle the incentives for their production? How do we make it unattractively expensive to build armies of remunerated content-generators and distributors, be they software bots or Bangladeshi workers paid to “like” Facebook pages?

A solution may lie in the system that runs the digital currency bitcoin, whose security model is aimed at this very problem: keeping all actors in the system honest while preserving anonymity and resisting censorship. Bitcoin’s core software forces the computers in its network to undertake an otherwise pointless “proof of work” exercise before they can add new batches of transactions to the “blockchain,” a distributed ledger that’s ostensibly free from manipulation or censorship. That computation uses up electricity, which means that while it’s relatively cheap to run a single computer node on the network, the cost climbs steeply with every machine added. This makes it prohibitively expensive for rogue users, even anonymous ones, to amass the computing power needed to control more than 50 percent of the network, the threshold at which they could rewrite the ledger and push through fraudulent transactions. The design of a more honest social media news platform could benefit from approaches inspired by this proof-of-work model, which could make it proportionately more expensive to assemble teams of bots or low-paid workers into fake networks of influence that carry out automated replication.
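To make that cost asymmetry concrete, here is a minimal proof-of-work sketch in Python. It illustrates the general technique only, not bitcoin’s actual implementation; the function name, toy data, and difficulty setting are ours, for illustration.

    import hashlib

    def proof_of_work(data: str, difficulty: int) -> int:
        """Search for a nonce such that SHA-256(data + nonce) begins with
        `difficulty` leading zeros. Expected work grows exponentially with
        difficulty, while verifying a solution takes just one hash."""
        target = "0" * difficulty
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
            if digest.startswith(target):
                return nonce
            nonce += 1

    # Stamping one post is cheap; stamping thousands per minute is not.
    print(proof_of_work("example post", difficulty=5))

The asymmetry is the point: producing the stamp costs real computation, while checking it is nearly free, so honest participants pay a trivial price and industrial-scale forgery becomes expensive.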

Critically, we’d need the software to distinguish between “fake,” manufactured networks and those composed of people who honestly and independently choose to follow a content provider and share their work. That may not be as hard as it sounds. Network analysis of big data, which maps how connections are forged, can identify groups that formed organically and contrast them with those artificially constructed by a controlling entity. Incorporating these features into the software through a machine-learning mechanism could ensure that the computing “tax” is levied against centrally managed networks and not against decentralized human networks that form naturally around ideas. Here too the transparency of the algorithm will be critical, so that software engineers, journalists, and lawyers can audit the platform managers and ensure they aren’t abusing this power to separate “good” networks from “bad.”
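As a toy illustration of that kind of structural analysis (our sketch, using the open-source networkx library, not any platform’s actual detection method), even a single crude metric separates a hub-and-spoke botnet from an organically grown graph:

    import networkx as nx

    def centralization(g: nx.Graph) -> float:
        """Fraction of all connections touching the single busiest node.
        A centrally managed network (one controller, many bots) scores
        high; organically grown graphs score far lower."""
        degrees = [d for _, d in g.degree()]
        return max(degrees) / sum(degrees) if degrees else 0.0

    manufactured = nx.star_graph(50)                  # one hub, 50 spokes
    organic = nx.erdos_renyi_graph(51, 0.1, seed=42)  # random, decentralized
    print(centralization(manufactured), centralization(organic))

A real classifier would fold in many more signals, such as the timing of follows and the similarity of sharing behavior, but the principle is the same: manufactured influence leaves structural fingerprints.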

These are loosely formed ideas, but we believe the logic behind them could help us overcome this dilemma. By shifting the path of least resistance for information, they could change the flow dynamics of the social organism, creating a more organic process of network growth—a fairer system, in other words. Models like these don’t rely on a censor-in-chief to decide on the truthfulness of every message; instead they objectively assess how messages are distributed and then price the user’s access accordingly. This focus on distribution is important because it centers on the core of what makes social media such a profound departure from traditional media—why it is, to our minds, the most dramatic change in mass communication since Gutenberg’s Bible.
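Continuing our sketch (the scaling rule and numbers here are hypothetical, not a worked-out proposal), the two pieces above could be combined so that the proof-of-work price of replicating a message rises with how manufactured the sharing network looks:

    def replication_difficulty(base: int, centralization_score: float) -> int:
        """Hypothetical pricing rule: networks that look organic
        (score near 0) pay roughly the base proof-of-work cost to share;
        hub-and-spoke networks (score near 1) face a higher difficulty,
        and hence exponentially more hashing work per message."""
        return base + round(centralization_score * 10)

Distribution, not content, sets the price: a lie shared by real, independent people costs nothing extra, while any message, true or false, pushed through a bot army becomes expensive to move.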

With old media companies, distribution relied on physical infrastructure such as printing presses, television towers, or cable networks. Now, the assets needed to get a message out are far more intangible. Distribution in social media depends on the willingness of individual nodes in a decentralized network—either people or bots—to share content with others. In this new system, with its clear biological parallels, the originator of a message must convince those nodes to pass the information along. If the system functions fairly, decisions to spread words, images, or videos will be based on their intellectual or emotional appeal. But as social media is currently designed, content providers can all too easily buy software modules or hire contractors to spread messages whose inherent appeal and truthfulness are irrelevant to that equation. Clearly, society’s interests lie with the first model, not the second. We want people to build networks of influence with the weight of their ideas, not buy them with the weight of their bank accounts.

Any redesigned platform that follows these principles would go head to head with the business models of the incumbent social media companies, especially Facebook. So, who would take them on? What about a consortium of traditional media companies? After all, they are the most threatened by this fake news phenomenon, since it directly undermines their business model, in which capital is dedicated toward doing the serious journalism behind trustworthy stories.

These 20th-century institutions can’t wind back the clock to a world where they control news distribution. For better or worse, social media is here to stay. But they do have a strong interest in backing an open, trusted platform in which ideas compete on a more level playing field. Given record-low levels of public confidence in traditional media, transparency around the algorithm will be paramount in this scenario. Anti-trust regulators would also demand it. But assuring transparency in this way would also be an opportunity for these providers to regain the trust of their readers and viewers.

We want these ideas to spur serious societal discussion. They offer a chance to reassess our understanding of what we mean by “free speech” in the social media era. In particular, the fake news problem, at least as we’ve framed it, could reopen debate around the conservative argument that “money is speech.” For its wanton neglect of the problem of unequal access, this position has always been thinly grounded—long before a Republican-stacked Supreme Court ratified it in the controversial Citizens United decision. But its shallowness is now in especially stark relief. Would it really be a breach of human rights to tax funds that are spent not so much on producing or promoting a message but on a machine-like network for systematically distributing deliberately deceptive information?

From a legal perspective, privately owned platforms will rightly argue that they aren’t bound by the First Amendment on such matters. But in an age when these companies have garnered gatekeeper powers that surpass those of many governments, we have a critical interest in how they execute those powers. The good news is that we, the users of their product, have powers, too. Without our clicks, they are worthless.

Oliver Luckett and Michael J. Casey are the authors of The Social Organism: A Radical Understanding of Social Media to Transform Your Business and Life.

Oliver Luckett is a technology entrepreneur and currently CEO of ReviloPark, a global culture accelerator. He served as Head of Innovation at the Walt Disney Company and co-founded the video-sharing platform Revver. As CEO of theAudience, Luckett worked with clients such as Obama for America, Coachella, Pixar, and American Express, and helped manage the digital personae of hundreds of celebrities and brands, including Star Wars, The Chainsmokers, Steve Aoki, and Toy Story 3.

Michael Casey is a writer and researcher in the fields of economics, culture, and information technology. He is the author of three critically acclaimed books: The Age of Cryptocurrency (2015), The Unfair Trade (2012), and Che’s Afterlife (2009). In a two-decade career as a journalist, much of it spent as a reporter, editor, and columnist at The Wall Street Journal, he wrote extensively on global economics and finance. In 2015 Casey became a Senior Advisor at MIT Media Lab’s Digital Currency Initiative.
