Tech

Instagram Comment Moderation Is Coming Soon to an ‘Internet-Famous’ Person Near You

NEVER READ THE COMMENTS

The Facebook-owned photo sharing service is rolling out powerful new moderation tools to combat harassment online—but when will those be available to the average user?


If you’re famous, you’re in luck: New Instagram tools to fight online harassment are coming your way. If you’re everyone else, be prepared to continue fighting trolls and flame wars the old-fashioned way.

Instagram’s head of public policy told The Washington Post Friday that the popular photo-sharing service has “slowly begun to offer accounts with high volume comment threads the option to moderate their comment experience.”

High-profile users will soon be able to filter out comments that use offensive terms and turn off comments on specific posts. But while the former feature is reportedly coming soon for all users, the Post could only report that the latter feature “may roll out to more accounts in the near future” (emphasis added).

An Instagram spokesperson told The Daily Beast that the purpose of giving popular accounts access to these features first is to ensure they function properly before being offered to more users. The company did not respond to a request for comment about whether all Instagrammers could expect access in the future.

With this move, Instagram will soon join Twitter in giving famous people—or, at least, “internet famous” people—access to exclusive quality control features that so many other users need.

Since at least March 2015, Twitter users who have been verified with a blue checkmark have had access to a “quality filter” setting that, when activated, “aims to remove all Tweets from your notifications timeline that contain threats, offensive or abusive language, duplicate content, or are sent from suspicious accounts.”

This reporter can confirm that the “quality filter” makes Twitter a more tolerable place to work and socialize.

But given that threats, abusive behavior, and spam are already against the general Twitter rules, it is unclear why this feature would only be available to the 193,000 users who have been verified, many of them journalists, athletes, or entertainers.

That still leaves hundreds of millions of users with no automatic means to filter out abusive tweets. This includes millions of young women who, as the Pew Research Center noted, “experience certain severe types of harassment at disproportionately high levels” and people of color who routinely receive racist abuse online. Unverified users can still mute specific terms, block users, or import a list of users to block, but that requires time and careful management. Others have to rely on third-party blocking services to filter out frequent offenders whose tweets might otherwise be caught by the “quality filter.”

Most recently, after Ghostbusters star and verified Twitter user Leslie Jones was targeted with a wave of racist hatred, Twitter CEO Jack Dorsey reached out to her directly—an invitation he would never be able to extend to every black woman on Twitter who has faced similar abuse.

As The Daily Dot’s Jaya Saxena wrote, “[Twitter’s abuse prevention initiative] doesn’t seem to be working very well. Or if it is, it’s only working for a select few.”

Twitter has already developed a reputation for responding most aggressively when the famous are targeted. Robin Williams’ daughter Zelda left Twitter after users harassed her over her father’s suicide, prompting Twitter to provide an official comment and suspend some of the accounts responsible for the abuse.

Twitter declined to comment to The Daily Beast on the discrepancy between the verified and unverified user experience.

Not every social media service feels the need to divide its anti-harassment features between internet haves and have-nots.

YouTube, for instance, affords every creator the option of manually approving comments. YouTubers can also turn off comments on any of their own videos—the same feature that Instagram is only now testing, and even then only for users with “high volume comment threads.”

YouTubers have long used these moderation features to curate their own comments sections and to preemptively avoid waves of abuse when vlogging about topics that they know will produce an intense reaction.

YouTube itself even makes use of these features. For example, when its official Spotlight channel posted a video celebrating LGBTQ Pride last month, YouTube ended up deactivating the comments because too many of them violated its hate speech policy.

It’s clear that internet abuse and harassment aren’t going anywhere soon. According to Pew, 40 percent of internet users have been harassed online. Nearly 20 percent have experienced severe forms of online harassment such as stalking or physical threats.

In this environment, a social media service is only as good as the moderation tools that are available to all of its users.

Any given person is much more likely to become the victim of online abuse than they are to become a celebrity or an Instagram model. And as that becomes painfully apparent, it will only get harder for social media services to justify giving quality control features to a select few.

Got a tip? Send it to The Daily Beast here.