These early months of 2023 have been a busy time for sputtering protests and crows of vindication in longstanding COVID debates.
New federal support emerged for the lab leak theory of the pandemic’s origin. A major research analysis cast serious doubt on the efficacy of mandating mask use among the general public. And a Lancet study reported that natural immunity is as good as or better than completing a two-shot vaccine schedule.
Meanwhile, in California, a federal judge in January blocked enforcement of a new law, A.B. 2098, which would allow the state’s medical board to punish doctors—to the point of taking away their medical licenses—for “the dissemination of misinformation or disinformation” about COVID-19 in conversations with patients.
A primary legal sticking point is the state’s definition of misinformation: “false information that is contradicted by contemporary scientific consensus.” As the judge observed, this language is vague at best. Arguably, it makes the government the arbiter of acceptable opinion, a gross abuse of constitutionally protected free speech.
It’s an effort in old-school censorship, by which I mean top-down, heavy-handed efforts to control what information and perspectives the public is allowed to access. And that’s exactly why, in 2023 America, a measure like this is worse than useless: You can’t manage 21st century information flow with 20th century constraints.
Blackouts don’t work like they once did. Journalists don’t collaborate with officials and sit on big stories as in times past—and if they do, they typically get scooped. And, most importantly, the general public’s access to (and therefore perception of) the scientific establishment and other institutional authorities has fundamentally changed in the last three decades.
Old-school censorship will only make our “misinformation crisis” worse because it can’t effectively limit public access to the forbidden data and ideas. All it can do is ensure the public receives them from fringier sources, while learning to trust even the most trustworthy experts and institutions a little bit less.
Imagine if the COVID-19 pandemic happened a hundred years ago—or any time in the pre-internet age. Research would have proceeded far more slowly, and scientific consensus on how to handle the virus likewise would’ve been even slower to emerge. Three years might feel like a long time to be figuring out viral origins and mask policy and comparative immunity, but from a historical perspective this has all moved with enviable speed.
And scientific progress isn’t the only thing that would be different in a pre-internet COVID scenario. The difference in information access would be just as important for the public experience of the pandemic. Scientific debates (like the three prominent ones in the recent news cycle) would’ve happened not quite behind closed doors, but certainly behind an access barrier probably 99 percent of people would never breach.
I mean, think about it: How would the average member of the public read just-published academic research in 1993? Or stay up to date on federal intelligence reports on viral origins in 1953? Or track daily national, state, and local infection rates in 1923?
The answer is: They generally wouldn’t. (And couldn’t.)
Back then, as Martin Gurri writes in The Revolt of the Public and the Crisis of Authority in the New Millennium, experts and other institutional elites could “believe themselves to be unquestioned masters of their special domain [because] so they were for many years. From the middle of the nineteenth to the end of the twentieth centuries, the public lacked the means to question, much less contradict, authoritative judgments derived from monopolies of information.”
But those monopolies have been broken. All that information is easily accessible to just about anyone, and so is the whole consensus-making process. The public sees not just the final product but also all the mess along the way.
Now, we can’t help but notice the “gap between the institutions’ claims of competence and their actual performance,” as Gurri observes. “Today failure happens out in the open, where everyone can see. With the arrival of the global information sphere, each failure is captured, reproduced, multiplied, amplified, and made to stand for authority as a whole.” In this context, it’s become all too easy to conflate a (good) process of self-correction on the way to discovery of truth with a (bad) narrative revision as failures and noble lies are exposed.
In the resulting chaos and loss of institutional trust, censorship attempts like A.B. 2098 (and other government efforts to steer or outright stop pandemic-related information flow) make even that innocent and productive self-correction look like deceit. These efforts aren’t just ineffective but counterproductive.
Gagging doctors in the exam room won’t return us to an earlier era of institutional prestige in which, in Gurri’s phrase, “the pratfalls of authority [can be] managed discreetly, camouflaged by the mystique of the expert at the top of his game.” Rather, the public can (and with time, will) get all the same content from sources less careful and less well-informed than their doctors.
In the name of protecting the public from “misinformation,” authorities and institutions will lose the public’s trust and drive readers to even less trustworthy sources.
The old information flow is permanently altered, no longer a large, single stream but a delta. Set aside the judgment of whether this is a good or bad development. It just is, and the delta is growing. Forcibly damming a single branch is a fool’s errand.