
Do Russian-Backed Bots Qualify for Free Speech?

Mr. Robot Goes to Washington

Hundreds of accounts created to sway an American campaign may not be people, but they were created by people. So, under the law, are bots speech?

Photo Illustration by Elizabeth Brockway/The Daily Beast

It has become increasingly clear that Russia waged an expensive and protracted influence campaign during the United States’ 2016 election to stir chaos and help swing the vote toward the Kremlin’s candidate of choice. One important tool in its kit was the dissemination and amplification of its message through the strategic use of bots.

It goes something like this: an account (or thousands of accounts) fronted by a photo of a pretty, scantily clad woman. (Using sex to get a person’s attention is a tactic as old as the Bible.) The profile posts statements like “MAGA,” coupled with anti-immigrant, anti-Hillary, racist or white nationalist, or otherwise divisive memes.

The bot makes repeated, often awkwardly phrased but strongly evocative statements intended to sow anger, insularity, and distrust, sometimes to the verge of violence.


Bots, short for robots, are programs built to automate particular tasks. They are all around us, in our devices (like Siri and Alexa) and our platforms and applications (like Facebook and Twitter). We interact with them daily.
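For the non-coders: a bot need not be sophisticated. Here is a minimal sketch in Python of a posting bot; the post() function is a hypothetical stand-in for a real platform API call, and the messages are placeholders, not anything from the actual campaign.

```python
import time

# Placeholder content; a real campaign would rotate through thousands of variants.
MESSAGES = [
    "Example divisive slogan #1",
    "Example divisive slogan #2",
]

def post(account: str, text: str) -> None:
    """Hypothetical stand-in for a real platform API call (e.g., posting a tweet)."""
    print(f"[{account}] {text}")

def run_bot(account: str, messages: list[str], cycles: int = 3,
            interval_seconds: float = 2.0) -> None:
    """Post each canned message on a fixed schedule, over and over."""
    for _ in range(cycles):
        for text in messages:
            post(account, text)
            time.sleep(interval_seconds)

run_bot("example_bot_account", MESSAGES)
```

Point the loop at real credentials instead of a print statement, run thousands of copies under different account names, and you have a bot army.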

Bots are made by humans, for humans. But bots are not human.

Recently, social media giants Facebook and Twitter have come under fire for their role in Russian influence operations, including the 2016 campaign offensive. Facebook suspended some 470 fake accounts this month, and Twitter announced this week that it will remove advertising from the Russian news outlets Russia Today and Sputnik. But untangling the core issue is harder.

And at the core is one complicated question: Are bots speech?

On the one hand, bots only exist if humans exert the effort to create them. Their creation, and their messaging, originates with a human. Similar questions have surfaced in other realms: courts have entertained whether code is speech, the Supreme Court has held that spending money can be speech, and courts recently entertained whether a macaque who took a “selfie” owns the copyright to his photo.

On the other hand, bots are code skeletons. They are not alive, let alone human. They are comparable to a tunnel through which a human voice echoes. And in the case of the Twitter and Facebook bots we have witnessed lately, they can be a national security threat.

Generally, First Amendment doctrine assumes that we have access to a robust “marketplace of ideas” through the freedom of speech. But what if some messengers are using bullhorns? And what if those creating the loudest message seek to undermine American independence and democracies across the world?

First Amendment Exceptions

The New York Times summarized Russian bot influence on Twitter and Facebook last week, reporting that the accounts “fired off identical messages seconds apart — and in the exact alphabetical order of their made-up names, according to the FireEye researchers. On Election Day, for instance, they found that one group of Twitter bots sent out the hashtag #WarAgainstDemocrats more than 1,700 times.”
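That fingerprint (identical text, posted seconds apart, by batches of accounts) is exactly the kind of pattern a simple script can surface. A minimal sketch in Python, with invented sample data and an assumed five-second window:

```python
from collections import defaultdict

# Invented sample timeline: (timestamp in seconds, account, text).
# In practice these records would come from a platform's data feed.
posts = [
    (0.0, "abigail_x", "#WarAgainstDemocrats"),
    (1.5, "bertha_y", "#WarAgainstDemocrats"),
    (3.0, "cassie_z", "#WarAgainstDemocrats"),
    (40.0, "real_user", "Anyone watching the returns tonight?"),
]

WINDOW_SECONDS = 5.0  # assumed threshold for "seconds apart"
MIN_ACCOUNTS = 3      # assumed burst size at which posting looks coordinated

def coordinated_clusters(posts, window=WINDOW_SECONDS, min_accounts=MIN_ACCOUNTS):
    """Group accounts that posted identical text within `window` seconds."""
    by_text = defaultdict(list)
    for ts, account, text in posts:
        by_text[text].append((ts, account))
    clusters = []
    for text, hits in by_text.items():
        hits.sort()  # order by timestamp
        for ts, _ in hits:
            burst = sorted({a for t, a in hits if abs(t - ts) <= window})
            if len(burst) >= min_accounts:
                clusters.append((text, burst))
                break
    return clusters

# Flags the three lockstep accounts; note the alphabetical made-up names,
# echoing the detail FireEye observed.
print(coordinated_clusters(posts))
```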

Bots reverberate so effectively partly because of the sheer volume of tweets, and partly because every very real person who accepts a connection to a bot hands it a microphone, making its message viewable to all of his very real connections.

As a legal matter, of course, Russian nationals do not receive the constitutional protections that US citizens do. Moreover, Twitter and Facebook require users to comply with terms of service and have the wherewithal to delete or suspend profiles at their discretion.

But for the purposes of Twitter, Facebook, or US law enforcement decisions about censorship, we must assume that social media platforms do not always know the nationality of a bot’s originator, and that some bots may be funneled through American sympathizers.

Despite the First Amendment’s absolutist guarantee, there are a few areas of speech that are explicitly unprotected. In 1942, the Supreme Court in Chaplinsky helpfully provided a laundry list of these: “the lewd and obscene, the profane, the libelous, and the insulting or ‘fighting’ words—those which by their very utterance inflict injury or tend to incite an immediate breach of the peace.”

Since then, the Court has added a carve-out for child pornography. In addition to criminal restrictions, there are also civil limits on speech, including defamation, restrictions on commercial speech, restrictions on paid speech, and restrictions on government employees.

Criminal restrictions generally fall within two categories: speech proscribed because it is low-value (obscenity, profanity, libel), and speech proscribed because of its danger (fighting words, hate speech, and yes, national security exceptions including terrorist speech, under a statute prohibiting “material support” to a “designated foreign terrorist organization”). The two seem to work in contrast: speech is either worthless or dangerous, but surely not both?

The legal scholar Tim Wu recently argued that the First Amendment is “dead” because bots and the Internet have made amplifying one’s voice so cheap, through “troll armies” and “flooding” tactics, that speech alone no longer holds people’s attention.

If this were the case, then we wouldn’t need to worry about dangerous speech, or low-value speech either, for that matter.

In practice, we know social media to be enormously influential in swaying opinions, even as many recent studies demonstrate that the customizability of our feeds leaves us in social media vacuums, generally surrounded by like-minded people. Against this backdrop, bots are relevant and powerful, which makes them a tempting tool for foreign actors conducting interference.

Bot speech reinforces and radicalizes beliefs among those already predisposed toward a certain socio-political leaning. It also distracts from real issues by introducing galvanizing, divisive ones like race-baiting. A recent study by AI expert Matt Chessen points to the ways that machine learning enhances computational propaganda, providing “radically enhanced capabilities to manipulate human minds.”

To be sure, humans are not computers, and we actively participate in the belief systems to which we subscribe. But confirmation bias (the tendency to read what we see in the world as reinforcement of what we already believe) is a powerful distortion in our interpretation of facts.

Plus, bots’ ability to reverberate through the channels of social media, using algorithms to replicate themselves and repeat one another’s speech, creates an insidious, artificial, and exponential noisiness that, to a human eye, can resemble consensus.

In other words, bots can make you think an unpopular candidate or idea is far more popular than it actually is, and that undue influence might change how you vote, or even how you behave in everyday life.
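A back-of-the-envelope illustration, with invented numbers, of how little automation it takes to manufacture apparent consensus:

```python
# Invented numbers: a modest bot farm versus genuine supporters who post once each.
real_supporters = 200   # humans who actually hold the view
bots = 50               # automated accounts
posts_per_bot = 40      # bots post relentlessly; most humans do not

human_posts = real_supporters * 1
bot_posts = bots * posts_per_bot

visible_share = bot_posts / (bot_posts + human_posts)
print(f"Bots generate {visible_share:.0%} of the visible posts")  # ~91%
```

Fifty accounts outshout two hundred people nine to one; scale the farm into the thousands and the asymmetry only grows.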

What Are Bots?

In the past, I’ve repeatedly raised the point that code is created by humans. As such, it does not have a life of its own; it is a reflection of the humans who created it. Hence the need for diverse development teams, to minimize the biases that survive in code as flaws.

Court views on code are consistent with this: there is no uniform legal conclusion that code is speech. While code resembles words in that it is made of alphanumeric characters, “speech” within the First Amendment is better conceived of as expression. Accordingly, courts have rejected Apple’s contention that writing breakable encryption would be “compelled speech” in the First Amendment sense.

It’s complex.

For one, we know that bots aren’t human. Bots get fewer protections, and they also get fewer rights: if you send a robot down the street, the police don’t have to read it its Miranda rights, but they also cannot arrest it or send it to prison.

We also know that code can be programmed to self-perpetuate (the noisiness of Twitter’s airwaves is partly a result of that). Bots are thus freed from the human limitations on producing expression, in which time and resources are finite.

That being said, humans provide the initial programming. Moreover, this human factor is key to our concern: our current interest in bot propagation involves limiting Russian influence.

Russians are humans. We could also be talking about Americans who have been turned, or who hold destructive worldviews (like the Unabomber). If the person who programs the bot is American, values-based decisions about our views on bots still apply.

I contend that, at least in the context of the Russian Facebook/Twitter campaign, bots are not speech, nor do they engage in speech in the First Amendment sense. They are, at most, speech ricochets. They represent a form of technology that can be weaponized.

My conclusion doesn’t rest on a literal definition of speech, but on a legal one: words are not always speech in the First Amendment sense, and actions sometimes are. In my view, the troll bots of the Facebook/Twitter campaign do not fall within First Amendment speech protection.

I base all of this on the architecture of bots as code skeletons, and the ways that they were weaponized in the recent campaign.

Context Matters

It is important to remember that the First Amendment does not provide an affirmative right to say something. Rather, the First Amendment guarantees the absence of government interference in certain expression, the most centrally protected of which is political speech.

To the extent that bots require a more in-depth analysis than simply “not speech,” the First Amendment analysis revolves around context. (Content-based restrictions are rarer, fall under strict scrutiny, and are not likely to be implicated here.)

Because of the compelling nature of the First Amendment interest generally, context and motivation almost always matter when it comes to considering something within the scope of the First Amendment’s protections.

For example, speech can be regulated based on the situation and location in which it is spoken. Those are called time, place, and manner restrictions. Similarly, speech that has certain effects upon other individuals (especially private citizens) can be restricted. Such restrictions are demonstrated in the torts of defamation, invasion of privacy, and intentional infliction of emotional distress.

As I mentioned, the Court also generally excludes from First Amendment protection speech that is likely to lead to tangible harm—such as true threats, fighting words, and incitement to imminent lawless action. The Chaplinsky Court justified these restrictions on the grounds that “such utterances are no essential part of any exposition of ideas, and are of such slight social value as a step to truth that any benefit that may be derived from them is clearly outweighed by the social interest in order and morality.”

There is much at stake in the current discussion about bots: the national security implications of allowing Russian bots on intimate social networks like Facebook and Twitter are evident, even if difficult to pinpoint in an exact place and time.

In other times and places in American history, recognizing the danger of contextual realities, we have deemed certain speech unprotected because it created national security vulnerabilities. The Alien and Sedition Acts were signed into law by President John Adams in 1798. One of them, the Alien Enemies Act, is still on the books.

Justice Frankfurter wrote in 1951, “The right of a government to maintain its existence—self-preservation—is the most pervasive aspect of sovereignty.” Of course, Frankfurter was there affirming a conviction under the Smith Act, which criminalized advocating the violent overthrow of the government, of Eugene Dennis, general secretary of the Communist Party USA.

Historic laws like these serve a dual purpose: they are cautionary tales about falling prey to the particular fears of the day. They also serve as weighty precedent for considering national security context when assessing speech protections.

Where Do We Go From Here?

Today’s conversation is unique because of the elements of technology in dissemination (bots, in this case) and platform (social media), and the particular dynamics of the private sector as a public forum.

It is not clear what the appetite will be for Facebook, Twitter, or Google to unclog their platforms of bots in any meaningful way. The three tech behemoths will testify again next week before the Senate and House Intelligence Committees. Meanwhile, this week brought bots hunting bots: @probabot_, a Twitter bot built by Quartz, looks for likely propaganda bots, though it doesn’t do anything beyond identifying them.
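Whatever @probabot_’s exact internals, the general shape of a bot-hunting heuristic is straightforward. Here is a toy scorer in Python; every signal and threshold below is invented for illustration, not taken from any real detector.

```python
from dataclasses import dataclass

@dataclass
class Account:
    name: str
    tweets_per_day: float
    account_age_days: int
    followers: int
    following: int

def bot_score(a: Account) -> float:
    """Crude 0-to-1 estimate that an account is automated (invented thresholds)."""
    score = 0.0
    if a.tweets_per_day > 100:                   # inhuman posting volume
        score += 0.4
    if a.account_age_days < 90:                  # bot farms skew young
        score += 0.2
    if a.following > 5 * max(a.followers, 1):    # follows far more than followed
        score += 0.2
    if any(ch.isdigit() for ch in a.name[-4:]):  # auto-generated name suffixes
        score += 0.2
    return min(score, 1.0)

suspect = Account("patriot_4417", tweets_per_day=320,
                  account_age_days=30, followers=12, following=800)
print(f"{suspect.name}: bot score {bot_score(suspect):.1f}")  # prints 1.0
```

Real detectors combine dozens of such signals in a trained model rather than hand-set weights, but the principle is the same: automation leaves statistical fingerprints.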

I agree with a New York Times article this week that concluded that we must face the growing disillusionment in America that makes some of us such ripe conduits for this kind of propaganda. Rather than revisit Cold War language, we should “work to systematically rebuild analytical skills across the American population and invest in the media to ensure that it is driven by truth, not clicks.”

Bots are one symptom of a systemic problem, and I do not mean to suggest the lack of free speech protection for bots as a one-step solution.

It’s worth noting that we don’t even witness conversations like this in countries like Russia and China, where speech is heavily regulated, government criticism is prohibited or rubbed out, and the Internet is accessed only through a firewall.

It is Americans’ tolerance for “speech”—in the broadest definition—that made it so difficult to combat the Russian influence campaign.

And it’s no coincidence that Russia took to a tool like bots. They don’t have a democratic voice to produce messaging. They need the propping up that automatons can provide. Bot armies play to their strengths.

For our part, we must do better in the future.

It’s frustrating that bots are hard to fight: they cross complex lines between private companies’ terms of service and government-level national security objectives. It’s frustrating that America’s willingness to tolerate dissident speech might have been the linchpin of a Russian influence campaign.

As we look to the future: may we have complex conversations about rights and speech, and the duties of a government during peace and war; and may we not hamstring ourselves by conflating humans and robots. May we notice the originators of messages we consume, and remain mindful of the motivations and toolkits of those who seek to undermine us as a country. May we carry a skeptical lens into our individual online interactions. And may we recognize that we are strongest, as a military and a country, when we do not allow for the sowing of seeds of divisiveness.

Many Trump supporters seem to have perceived themselves as tough on national security and militantly pro-US armed forces, yet they were being played by Russian bots. I’ve worked for the US military. The US military is, or seeks to be, a values-based institution of interpersonal tolerance grounded in operational need: it is on the vanguard of racial and gender integration, a vehicle for individual advancement in our increasingly economically stratified society, and home to some of the most sophisticated research in alternative innovation.

Those who would seek to uphold our national security should not tolerate a bot-enabled influence campaign that played to our lowest impulses.

* The views expressed herein are the personal views of the author and do not necessarily represent the views of the FCC or the U.S. Government, for whom she works.

—with additional reporting by Chinmayi Sharma
