We face a new menace to our national security—one that is as grave and pernicious as any we have seen in the past. But this one may prove much more difficult to contain.
Manifold threats are emerging across the information landscape on which we live, work, and make fundamental political choices about who we are and what we stand for. These threats are made more challenging because so few fully understand them, because the government and the electorate are both so ill-equipped to address them, and because containing them will require us to make choices with profound philosophical consequences for the future of the social contract.
Everywhere you look in the past few weeks, the growing risks we face have been made clearer. CIA Director Bill Burns has warned of the potential for TikTok’s Chinese owners to use the app to access data they could then exploit to threaten our national security. The scandals shaking cryptocurrencies reveal that the world of digital finance presents unique opportunities for scam artists to defraud a gullible and ignorant public. Tech moguls have emerged as the robber barons of our age, wielding unprecedented power, controlling without constraint vast swaths of the marketplace and the means by which we connect and function as a society.
The case of Elon Musk stands out. While his performance as Twitter’s new CEO and owner has been marked by darkly comic narcissism and ineptitude, it has also had much more serious elements.
Musk, who has in the past revealed an affinity for and relationship with Vladimir Putin, has taken stances that have ranged from mouthing Kremlin proposals to end the war in Ukraine to (temporarily) halting Starlink services to Ukrainians. (While the service was restored, the episode illustrated the kind of impact a tech mogul’s mercurial policies can have on sensitive security concerns.)
Musk has reintroduced to Twitter right-wing voices, including those of racists and neo-Nazis who have in the past attacked democracies in America and elsewhere in the world. He has opened the floodgates to more disinformation on the site. He has publicly used the site to question the undeniable value of vaccines to a public threatened by infectious disease and to minimize violence against senior U.S. government officials while promoting unfounded conspiracy theories. He has accepted money for the project from foreign sources that likely saw Twitter as a useful tool of opposition to their policies, calling into question both his motives and theirs in his self-destructive management of the site. And he has done all this while remaining pliant to the concerns of the government of the People’s Republic of China.
Neither the Musk story nor the others cited above are simple tales of the marketplace at work. They are signs of abuse by foreign actors (and the super-empowered) that put at risk financial markets, our privacy, our democracy, individual lives, public health, our national security, and that of other nations.
They are not the only instances in which developments in the digital world have done so. We saw this as sites like Facebook were exploited by the Russians in their efforts to interfere with America’s 2016 elections. We saw it as web-based platforms supplanted America’s media infrastructure and became the primary source of news and information for many, as well as the primary recipient of advertising dollars, without being constrained by the regulations that for decades served as guardrails for media organizations, from liability for third-party content (from which the platforms are immune) to ownership rules.
Foreign actors can use social media (and other new media) to influence domestic debate, attack groups they see as a threat to their interests, and promote unrest and division. They can use new digital tools to steal everything from data to intellectual property to financial reserves themselves. Digital investors with dubious allegiances can deny vital services to those in need worldwide, or establish rules for their enterprises that advance dangerous political agendas. Domestic extremists can pose threats similar to those from overseas. And new technologies dramatically raise the likelihood of even more grievous threats.
AI chatbots could flood social media sites with millions of algorithmic voices advancing a single agenda or drowning out real people and holders of particular political views. AI-empowered trading schemes will give the super-wealthy even greater advantages in computer-driven financial markets and exacerbate social inequality.
AI is a tool that can give combatants a strategic advantage or serve as a force multiplier for terrorists, yet its development and distribution are still largely left to market forces (or constrained only by narrow, country-specific programs to limit its spread, like those of the U.S. with regard to China). Deepfake technology can make it all but impossible to know what is actually happening in the world, or what individual people are really saying or believe.
To date these threats have been addressed piecemeal, when they have been addressed at all. Our failure to anticipate them is a sign of our government’s inability to provide useful oversight, as is our unwillingness to properly regulate Big Tech and challenge the highly concentrated power of a few organizations and individuals.
As we seek an appropriate response to the emergence of these inter-related domestic and international threats, we are stymied by institutional gaps in our government. We are also hamstrung by our history of seeking unilateral or bilateral agreements rather than the multilateral ground rules we need to regulate the internet, digital finance, and trade in sensitive technologies.
All this requires a new approach to national security thinking. We need a National Information Security Strategy and the mechanisms at the policy and working levels of our government to develop, oversee, and implement it.
We should be cognizant of several hurdles that exist to achieving this goal.
Too few in our government, especially on the legislative side, understand these next-generation technology issues well enough to handle them. In the U.S., our corrupt system of political campaign finance gives disproportionate clout to tech moguls, compounding the already immense power they wield through the platforms and digital resources they control.
Existing institutional rivalries within the government (and between governments) will make collaboration on critical questions difficult. And perhaps most daunting of all, protecting fundamental rights like free speech and expression will at times be difficult to balance with protecting against disinformation, the rise of groups that endanger our well-being, and foreign efforts to undermine our security.
That said, we have faced and managed such challenges in the past with each new era of information technology—from media ownership rules to the fairness doctrine. We can meet this challenge again.
Indeed, we must. Absent constraints on malevolent actors, protections that ensure safe commerce, and the ability to identify and manage threats to our information security, we will put at risk our fundamental freedoms and values in ways that today’s headlines should already be making painfully clear to all.