If you only read The Atlantic to get your tech news, you’d probably be under the impression that social media is a Leviathan on an inexorable path to devouring democracy.
Headlines scream that Facebook is a “Doomsday Machine” and an autocratic “hostile foreign power” that has made American life “uniquely stupid.” A recent Atlantic headline on a Jonathan Haidt article said it plainly: “Yes, Social Media Really Is Undermining Democracy.”
Whatever the magazine’s editorial stance, these claims are not empirically grounded, and it’s unlikely they’ll stop being made any time soon. Scary narratives have a way of spreading and taking hold in ways that “we don’t know yet” wouldn’t.
Professor Haidt, a social psychologist at New York University, notes in The Atlantic that “conflicting studies are common in social-science research.” And yet, his “undermining democracy” essay downplayed the breadth of evidence and studies critiquing the concept of—to cite one example—filter bubbles.
A group of researchers at the University of Amsterdam asked whether we should worry about filter bubbles and, after reviewing the empirical evidence in 2016, answered, “No.”
Professor Axel Bruns, a media scholar at the Queensland University of Technology and the author of Are Filter Bubbles Real?, reviewed the data that supported the concepts of echo chambers and filter bubbles, and also the data that did not. He concluded the concepts are based on a “flimsy foundation” of primarily anecdotal evidence. Bruns had a valid point: The focus on those theories prevents us from properly confronting the deeper causes of division in both politics and society.
In an in-depth response to Haidt’s Atlantic piece, New Yorker writer Gideon Lewis-Kraus interviewed researchers who admitted that there’s a lot less scientific consensus on the positive or negative impacts of social media than many people think. “Research indicates that most of us are actually exposed to a wider range of views on social media than we are in real life, where our social networks—in the original use of the term—are rarely heterogeneous,” Lewis-Kraus wrote, adding that Haidt later told him that he no longer thinks echo chambers are “as widespread a problem as he’d once imagined.”
More Views, More Hostility
Being exposed to more views, however, raises a different issue. According to Professor Michael Bang Petersen, a political scientist at Aarhus University, that’s where much of the felt hostility of social media comes from: not because the sites make us behave differently, but because they expose us to things we wouldn’t normally encounter in our everyday lives.
While media and activists have obsessed over misinformation and disinformation on social media since the 2016 U.S. presidential election, researchers from Harvard University analyzed both mainstream and social media coverage of the election and concluded: “The wave of attention to fake news is grounded in a real phenomenon, but at least in the 2016 election, it seems to have played a relatively small role in the overall scheme of things.”
Accompanying the debate is a public Google Doc—curated by Haidt and Duke University sociology and public policy professor Chris Bail—titled “Social Media and Political Dysfunction: A Collaborative Review.” It collects studies that found a negative influence of social media on democracy alongside studies that found no such effect, or whose results were inconclusive.
Bail has pointed out that the number of people exposed to fake news is pretty low—only 2 percent of Twitter users routinely see fake news. More importantly, they don’t believe what they read when they do see it.
But it is Haidt—“Social media may not be the primary cause of polarization, but it is an important cause,” he argued—who repeatedly overplays the role of social media in society. His entire thesis focuses on the 2010s, the decade when social media became practically ubiquitous. Because polarization also grew during that period, Haidt draws sweeping negative conclusions from the correlation, and many of them simply don’t hold up to scrutiny.
Political division online also predates the platforms Haidt blames. A study that examined how political blogs linked to one another in the run-up to the 2004 U.S. presidential election (years before social media became ubiquitous) found a highly segregated blogosphere: “Liberals and conservatives linked primarily within their separate communities, with far fewer cross-links exchanged between them. This division extended into their discussions, with liberal and conservative blogs focusing on different news articles, topics, and political figures.” No wonder the study was titled “Divided They Blog.” Its visualization was striking: a red blob and a blue blob, with very little overlap between them.
Then there’s the concern over “rabbit holes,” where algorithms supposedly take normal, everyday people and turn them into radical extremists. Professor Brendan Nyhan, a political scientist at Dartmouth College, found that aside from “some anecdotal evidence to suggest that this does happen,” the more common and bigger problem is that people “deliberately seek out vile content” via subscriptions, not via recommendation algorithms. In other words, they don’t fall into the radicalization rabbit hole; they choose it.
This is especially troubling on the fringes, where already radicalized individuals seek out extremist content that reinforces their predispositions. The focus should be on this small segment of the population: not the average user, but a minority whose behavior can be genuinely dangerous.
Taken as a whole, many of the prevailing narratives about social media (filter bubbles, echo chambers, fake news, algorithmic radicalization) rest on shaky foundations. Much of the underlying research is correlational, and correlation alone cannot tell us which way an effect runs: heavy social media use may deepen polarization, or people who are already polarized may simply gravitate toward heavier social media use. Mistaking that correlation for causation is much of what made these narratives so popular.
The Era of Techlash
Because the research is inchoate and ongoing, “it’s difficult to say anything on the topic with absolute certainty,” concluded Bail. Yet we continue to see headlines that project absolute certainty, as if the threat posed by social media were an undeniable fact. It’s not. While scientists are still raising question marks, the media keeps raising exclamation marks.
The tech backlash against social media has been strengthening rapidly since 2017, the year Trump took office. There’s a visible disconnect between the empirical evidence and the overwrought declarations of doom the media keeps amplifying, and as the Techlash progresses, that gap widens.
This is where the escalating rhetoric of tech’s strident critics comes in handy: it makes for irresistible copy.
Haidt’s theme, for example, is destruction: “America’s tech companies” as “destroyers” who have “created products that now appear to be corrosive to democracy” and “brought us to the verge of self-destruction.” That framing fits the nature of the news media (long before social media existed): hearing that we’re all doomed is much more interesting than a nuanced discussion.
The news media often covers social media through this “Techlash Filter.” While Instagram filters make their subjects look shiny and pretty, the Techlash Filter uses hyperbolic metaphors (like the “Doomsday Machine”) to make social media look scarier than it is.
In general, news media coverage defines the topics for discussion (“what to think about”) and the framing of the topics (“how to think about these issues”). For all the concern about echo chambers in social media, there’s a familiarity bias in news media—where journalists look at their colleagues’ work and cover the same topic from the same perspective.
This copycat behavior often snowballs into misguided outrage, which in turn leads tech companies to dismiss criticism (even when it’s valid) as uninformed cynicism. The exaggerated discourse plays into their hands: when theories about the evils of social media are overblown, the companies’ PR teams get easy material to push back against.
Inside Facebook, for example, Nick Clegg told employees to “listen and learn from criticism when it is fair, and push back strongly when it is not.” Citing the “bad and polarizing” content on private messaging apps like Telegram, WhatsApp, and others, Clegg wrote: “None of those apps deploy content or ranking algorithms. It’s just humans talking to humans without any machine getting in the way. We need to look at ourselves in the mirror and not wrap ourselves in the false comfort that we have simply been manipulated by machines all along.”
Haidt accused Meta (Facebook’s parent company) of cherry-picking studies in its corporate blog response to one of his recent Atlantic articles. But Haidt did the same thing in his magazine pieces, just for the opposite argument. Contradictory findings mean the debate isn’t over, and there’s still a great deal we don’t know.
In a recent Techdirt opinion piece, I argued that journalists should “beware of overconfident techies bragging about their innovation capabilities AND overconfident critics accusing that innovation of atrocities.” Their readers should adopt such healthy skepticism as well.
There is a positive aspect to Techlash pressure: it makes the big tech companies think hard about building in safeguards in advance, ask the “what could possibly go wrong?” questions, and deploy resources to combat potential harms. That’s undoubtedly a good thing.
However, when lawmakers promote grandstanding bills to “fix” social media, the devil is always in the details, and we ought to think much harder about how those bills are crafted. There are real costs when regulators waste their time on simplistic solutions built on inconclusive evidence.
One legislative battle worth fighting would be to lift the shroud of secrecy for which tech companies are infamous. When platforms are perceived as black boxes that rely ever more heavily on recommendation algorithms, it’s easy to fear their business. Greater transparency is crucial here.
Independent researchers should be given access to more data from the big tech companies. That would broaden the discussion and make room for a whole-of-society approach in which experts offer informed arguments.
Haidt, in his Atlantic piece, acknowledged that “We should not expect social science to ‘settle’ the matter until the 2030s.”
If that’s the case, let’s avoid definitive headlines, because we have nothing remotely resembling definitive proof.