
How Block Bot Could Save the Internet

Civility!

A new Twitter screening program allows users to stiff-arm the disagreeable. If we can’t save ourselves from each other, maybe algorithms can.

Photo Illustration by Emil Lendof/The Daily Beast

When Benjamin Netanyahu recently ruffled feathers by forging ahead with a speech before Congress, predictable arguments flew. One side pronounced the address a must-listen, not just on news grounds but on moral ones. The other called for a boycott, a principled refusal to pay attention. Intriguingly, no major voice advanced the old-fashioned liberal argument that John Stuart Mill made famous: Silence no voice, so that contending ideas may clash and the truth may emerge. That’s how free democracies form credibility and legitimacy, Mill quaintly believed. Today, whatever our differences, our minds are increasingly made up: If we think you are wrong, we don’t want to hear from you.

Nowhere is this attitude on stronger display than on the Internet. There, confidence breeds arrogance, and arrogance hostility. We don't merely ask people to shut up; we command them, and thanks to the power of spontaneous organization, our commands take effect, while our commanders revel in taking no prisoners.

What to do? Even more than a “crisis of free speech,” we confront a problem of functionality. It’s just too difficult to make one’s way through the Internet—assuming even an entry-level desire to communicate about things that we care about—without hitting a veritable fusillade of rotten vegetables, eyelash-curling invective, and personal attacks more vile and degrading than Mill could have possibly imagined.

The “lively debate” and “robust discussion” of a liberal society require a baseline of manners and mercy that, today, far too many online denizens simply lack. Yes, we could argue about which teams and which camps argue in better faith than the others. But that would obscure the point: The guilty parties are everywhere. Ideology alone is not to blame. Spamming and trolling and policing and outright despising have come to define the online experience, and a workable solution just will not be found in the annals of 19th-century liberal political theory.

We need a better way—one that’s better tailored to online life because it operates in the same way, according to the same logic.

Believe it or not, we have one.

Meet The Block Bot, an invention of the social-justice left that allows people to automatically screen out disliked content and disliked people from Twitter. The Block Bot comes complete with a helpful hierarchy of disapproval, ranging from mere irritation to bigotry in the first degree. Some people who have been added to The Block Bot’s rolls have been offended, of course. But in addition to muting offense, The Block Bot dissipates rancor.
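
To make the mechanics concrete, here is a minimal sketch of how a shared, tiered blocklist might work in principle. It is purely illustrative Python; the names (Tier, Blocklist, filter_timeline) and the tier labels are invented for this example and are not drawn from The Block Bot's actual code or Twitter's API.

```python
# Illustrative sketch only: a toy, tiered, shared blocklist.
# All names and tiers here are hypothetical, not The Block Bot's real implementation.
from dataclasses import dataclass, field
from enum import IntEnum


class Tier(IntEnum):
    """A rough analogue of a graded hierarchy of disapproval."""
    ANNOYING = 1    # mere irritation
    UNPLEASANT = 2
    ABUSIVE = 3     # the "bigotry in the first degree" end of the scale


@dataclass
class Blocklist:
    """A shared, subscribable list mapping handles to tiers."""
    entries: dict[str, Tier] = field(default_factory=dict)

    def add(self, handle: str, tier: Tier) -> None:
        self.entries[handle] = tier

    def blocks(self, handle: str, threshold: Tier) -> bool:
        """True if the handle is listed at or above the subscriber's chosen threshold."""
        tier = self.entries.get(handle)
        return tier is not None and tier >= threshold


def filter_timeline(timeline, blocklist, threshold):
    """Drop posts from anyone the subscriber has chosen to screen out."""
    return [post for post in timeline
            if not blocklist.blocks(post["author"], threshold)]


if __name__ == "__main__":
    shared = Blocklist()
    shared.add("@troll_one", Tier.ABUSIVE)
    shared.add("@reply_guy", Tier.ANNOYING)

    timeline = [
        {"author": "@friend", "text": "lunch?"},
        {"author": "@troll_one", "text": "..."},
        {"author": "@reply_guy", "text": "well, actually..."},
    ]

    # A subscriber tolerant of mere annoyance but not of anything worse:
    print(filter_timeline(timeline, shared, Tier.UNPLEASANT))
```

The point of the sketch is the subscription model: one shared list, many subscribers, each choosing how far down the hierarchy of disapproval to draw their own line.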

Turns out, The Block Bot helps us see how “breaking down boundaries” isn’t the panacea our creative and optimistic culture so often claims it to be. Turns out, the Internet will always contain people too toxic, or just too annoying, to put up with indefinitely. Screening them out has become a must, and not just for social-justice types. For all of us.

By now, we have all quickly grown accustomed to the idea that algorithms can do better than we can on our own at finding, ranking, and realizing our preferences. Tinder, the gold standard of online dating apps, has swiftly begotten a first wave of niche-specific clones, catering to users united around very particular identities and tastes. Today, seniors, rural residents, and practitioners of BDSM; tomorrow, the world. Enterprising app hackers, meanwhile, have created bots designed to customize the process of prospecting for matches even further, with remarkably effective results. The better the bot, the better the experience, from the standpoint of both process and results. Online, the two go closely hand in hand: the more “frictionless” and smooth the former, the more relevant, satisfying, and pure the latter.

The next step in the evolution of association on the Internet is to re-create this experience in filtering people out, not filtering them in. In the realm of romance, it’s actually quite difficult to drum up affinities without powerful software. But in the realms of politics, culture, and identity, it’s much easier. You don’t need an elegant app or an expertly assembled bot to find your team, tribe, or community online. You just need Reddit, or Twitter, or Facebook, or Tumblr; the list goes on and on. It’s precisely because community formation is so easy that we have so many battling clans online. And since it’s so much easier to tune in friendly people than to tune out hostile ones, we’re stuck with an Internet aswarm with verbal combat, battles every bit as dangerous as the ones anti-liberal pessimists like Thomas Hobbes envisioned in the absence of an all-powerful overseer.

As much as we want an online Leviathan when it comes to basic security, we yearn for a way to limit unbridled speech without submitting to a single standard of value. If we often want to pull up the drawbridges on our conversational castles, we want to decide when, how, and for whom, with the same intuitive and fine degree of control we’re accustomed to in seeking out just the right product, just the right entertainment, or just the right mate online.

And why not? What objection could we possibly have to handling communication in this way? A fearful, nostalgic sense of guilt at just how illiberal it might be? Nonsense. We’d never let a mob of insufferable people, much less those at odds with our very existence, into our homes, or even into the public and semi-public places we frequent, like churches, restaurants, or parks. Instead of clinging to the unreasonable expectation that people exercise the same self-restraint online that they still mostly do in real life, we need to accept the ’net for what it is and adopt a new modus operandi.

The bots can help us here, where we cannot help ourselves. It’s time to let them. As Chappie might put it, humanity’s last hope for decent online discourse isn’t human.
