
Most Comments Are Horrible—Sites Look For Ways To Make Them Better

Jesse Singal reports on the latest attempts to stem the flow of Internet bile on news sites.


If you want a bracing look at just how infantile Internet comments get, you could do worse than glance at a recent Huffington Post article on Arizona’s immigration law.

“IN AZ when I see an illegal i bump into his car, call the cops and watc wahts happens lol,” wrote “Come Out and playSB1070,” who, one hopes, is not an English teacher.

“This is typical of left wing liars,” opined Bebesc, who was apparently accusing President Obama of going soft on immigration. “These people are running this country into the ground with their lies and misinformation.”


The screechiest liberals on the board didn’t do much better. “We don’t need no stinking Facts!” “Ex-Fed” wrote. “We have Rupert’s Fake News to tell us what to hate and what to fear. Life is so simple for the mouth breathing fox-bats out there, it’s like no thinking, and no homework, ever!”

Suffice it to say there was not a lot of cogent discussion of immigration policy. And although moderators quickly started excising the worst of the worst comments, any intelligent discussion that might have approached this particular story page was likely scared off.

Of course, since the beginning of the Internet, comments sections—which combine our love for barstool pontification with the allure of instant, worldwide publication—have always been a sort of digital Wild West, in which leathery cowboys are replaced by pasty people with names like RedDog1974 slouched before glowing screens in darkened rooms, shooting first and thinking later, if at all.

The Huffington Post isn’t alone, of course, in enduring the fetid wave that is the vast majority of Internet commenting. Every major online-news site—The Daily Beast included—has its share of commenting issues. (Perhaps nowhere is the scene as bad as it is at YouTube, whose developers have acknowledged the problem and say they’re working on a solution.)

So how should community managers tamp down the vitriol yet keep the discussion alive? Most larger online-news outlets realized long ago that you can have a fully open, democratic commenting space, or you can have intelligent conversations—but, generally speaking, you can’t have both.


Academic research into online civility backs this up. Kate Kenski, a political-communications researcher at the University of Arizona, is currently studying online behavior in discussion settings. She and her colleagues are still early in their research, but she said they’ve found that 15 to 20 percent of all online comments contain some form of name-calling. The level of vitriol varies from site to site, which buttresses the idea that social norms greatly shape online discourse. “Different communities as well as different boards do establish their own norms,” Kenski said.

Community standards are surely an important part of it. Just as a neighborhood’s social norms have great power to determine residents’ behavior—do people throw trash on the street? blast loud music at all hours?—online communities are strongly shaped by the prevailing standards. Keep these standards high, it’s reasonable to assume, and an intelligent conversation will follow.

In general, though, it’s become clear that the old model of anonymous free-for-all commenting is on its way out. News sites large and small are requiring commenters to attach a real identity to their posts, often by linking their accounts to a Facebook profile.

“People, when they are anonymous, don’t feel the same level of responsibility for their actions,” Kenski said. “And when you know that your words can come back as a direct reflection of you, as something that can be carried around, you may decide to temper your comments in some way.”

But some people will continue to spout invective even when their name is attached to their comments, and so larger news outlets, such as NPR, have instituted heavily moderated comment policies.

Previously, the network’s website, npr.org, intervened only to moderate comments that readers had specifically flagged. But starting in March of last year, the site introduced a new policy requiring all prospective commenters to endure a period of pre-moderation—that is, none of their comments go up before a moderator has had a look. Once a user’s first set of comments has passed muster, they’re able to comment freely and without moderation. But commenters who repeatedly flout the site’s commenting rules get put back on probation.
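Stripped to its essentials, a policy like this is a small state machine attached to each commenter. The Python sketch below is purely illustrative; the class, method names, and thresholds are assumptions rather than anything NPR has described, but it captures the flow: new users’ comments are held until a moderator approves them, a streak of approved comments earns unmoderated posting, and repeated violations send a user back to probation.

```python
# Illustrative sketch of a pre-moderation policy as a per-commenter state
# machine. Names and thresholds are hypothetical, not NPR's actual system.

class Commenter:
    def __init__(self, user_id, trust_threshold=5, violation_limit=3):
        self.user_id = user_id
        self.trust_threshold = trust_threshold   # approvals needed to earn trust
        self.violation_limit = violation_limit   # strikes before re-probation
        self.approved_count = 0
        self.violations = 0
        self.trusted = False                     # False = every comment is held for review

    def submit(self, comment, moderator_approves):
        """Hold new users' comments for review; let trusted users post immediately."""
        if self.trusted:
            return "published"
        if moderator_approves(comment):
            self.approved_count += 1
            if self.approved_count >= self.trust_threshold:
                self.trusted = True              # first set of comments passed muster
            return "published"
        return "rejected"

    def record_violation(self):
        """Called when a trusted user's published comment is later found to break the rules."""
        self.violations += 1
        if self.violations >= self.violation_limit:
            self.trusted = False                 # back on probation
            self.approved_count = 0
            self.violations = 0
```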

A large part of the reasoning behind this approach, said Kate Myers, NPR’s product manager for social-media tools, is to weed out vitriolic drive-by commenters—people who are often referred to NPR by an outside link and who show up at an article just to cause trouble. “They’re not part of our regular, consistent community,” Myers said.

Myers acknowledged that there are tradeoffs inherent in this approach, which runs counter to the Internet’s wide-open, anything-goes ethos. “We make those calculations and make those balances every time we talk about making changes,” she said. “Because we really are committed and believe in the idea of the free and open community. But we know that we want to have these sometimes conflicting goals of encouraging a safe space for people to comment and to have a civil discourse.”

This all sounds very nice, but maintaining that safe space isn’t cheap. The New York Times, for instance, also employs a raft of moderators—three full-time and 10 part-time. One of the full-timers is Erin Wright, who has been a moderator for the Times for the past five years. Based in Philadelphia, she works from 6 p.m. to 2 a.m. The job, she said, is “sort of like coming in and being a referee in the middle of the game.”

Wright said she moderates between 500 and 600 comments on a given night and “easily 800-plus” on a big news night—or nearly two per minute. She lets through about 70 percent of the comments that come across her screen, she estimated. “It’s gotta be on-topic, first and foremost,” Wright said. “That’s what I look for immediately. If I’m moderating a comment thread on horse racing and someone says, ‘Obama sucks,’ that’s out immediately, and that happens quite a bit.”

Wright also makes sure “there is no name-calling either of each other or any attacks on Times reporters or management, and that there’s a good, healthy dialogue from all sides.”

Without moderators, Wright said, things would get ugly fast. “The stuff that we reject is pretty virulent,” she said. “People try to get the most disgusting comments through us.”

Bassey Etim, the Times’ community manager, said his driving philosophy is that “when you’re coming to the Times, what you’re coming for is urbane and literate content, and there’s no reason for comments to be held to a lower standard than that.”

“If your comment is incoherent, we don’t approve it,” he added. “If you use all caps we don’t approve that. If your comment is clearly just trolling we don’t approve that either.”

The Daily Beast employs two part-time moderators, who generally watch for flagged comments, but at times the site takes more extreme measures. When comments on a recent story about pro-life activists became inappropriately vicious and personal, we closed off the comments completely.

***

But most news outlets don’t have the resources to hire a dedicated staff of comment moderators. One less costly approach, still in its early stages, is to use technological solutions that hold out the promise of making flame wars easier to ignore, if not eliminating them altogether.

In 2009 Srikanth Narayan created tldr (Internet shorthand for “too long; didn’t read”) as his master’s thesis project at UC Berkeley’s School of Information. Narayan told The Daily Beast he was inspired by the link-sharing site Reddit, which often features robust discussion.

“I was on Reddit from the days it was starting out,” said Narayan, who now works at Tidemark Systems in San Francisco, “and what attracted me to Reddit was the community actively discussing all these stories that were coming out, and the quality of commentary that was building up.”

But that quality got diluted as the site expanded into the behemoth it is today, he said. “There would be thousands of comments, and you would drown in all these comments.”

Working with his adviser Coye Cheshire, Narayan developed tldr to help users navigate the maze of comments. The system examines the structure of a conversation—is it a lot of quick back-and-forth comments (suggesting a flame war), or longer posts?

Employing this structural examination, and also taking into account the degree to which each comment gets upvoted or downvoted by other users, Narayan and Cheshire developed an elegant visualization system that makes it easy, even in a thread of thousands of comments or more, to see where (likely) productive conversation is going on and where things have devolved into a flame war.

Each comment is portrayed as a block and color-coded to indicate whether it is popular or not. Replies stack downward, leading in some cases to long, stalactite-like structures. Users can gauge the tone of a discussion by its shape and color, allowing them to quickly jump down to the most productive parts of a given discussion.
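To make the idea concrete, here is a minimal Python sketch of the kind of structural heuristic such a tool could use, assuming a simple nested-comment data model. The scoring rules, thresholds, and colors are illustrative guesses, not Narayan and Cheshire’s actual implementation.

```python
# Sketch of a flame-war heuristic and block color-coding for a comment thread.
# The data model, thresholds, and colors are assumptions for illustration only.
from dataclasses import dataclass, field

@dataclass
class Comment:
    text: str
    upvotes: int = 0
    downvotes: int = 0
    replies: list = field(default_factory=list)

def thread_shape(comment, depth=0):
    """Walk a reply chain, collecting (depth, word count, vote balance) per comment."""
    rows = [(depth, len(comment.text.split()), comment.upvotes - comment.downvotes)]
    for reply in comment.replies:
        rows.extend(thread_shape(reply, depth + 1))
    return rows

def looks_like_flame_war(comment):
    """Heuristic: deep chains of short, poorly voted comments suggest a flame war."""
    rows = thread_shape(comment)
    avg_length = sum(words for _, words, _ in rows) / len(rows)
    avg_score = sum(score for _, _, score in rows) / len(rows)
    max_depth = max(depth for depth, _, _ in rows)
    return max_depth >= 6 and avg_length < 15 and avg_score <= 0

def block_color(comment):
    """Color-code a single comment block by its vote balance."""
    score = comment.upvotes - comment.downvotes
    if score > 5:
        return "green"   # popular
    if score < -5:
        return "red"     # unpopular
    return "gray"        # neutral
```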

Using tldr, Narayan wrote in an email, “it is easy to discern threads where the discussion has tended to be verbose and expressive, or a thread where most of the comments are one-liner quips.”

The nifty part of tldr is that it eliminates the need for costly moderators. The flamers can hurl invective at each other all they want, but those seeking a productive discussion, empowered by a bird’s-eye “map” of the thread, can avoid getting sucked in.

“You always know where there are flame wars happening,” said Narayan, “and you don’t need to look at them if you don’t want to, and you can just skip over them and go to the [conversations] you’re interested in.”

***

For those who don’t have the money for moderators and don’t want to deal with trolls and flame wars, there’s always the nuclear option: turning off comments altogether. The most prominent blogger to have done this is probably The Daily Beast’s Andrew Sullivan. Sullivan, one of the earliest bloggers, said his initial decision not to have a comments section was purely pragmatic: a dozen years ago, when blogging was in its infancy, no one knew how or whether libel laws would apply, and Sullivan didn’t want a hotheaded commenter to land him in legal jeopardy.

(In fact, federal law largely protects websites from liability over user-generated comments.)

But he’s retained his no-comments policy despite financial incentives to do otherwise. “The one thing comments give you is easy pageviews,” he said. “Commercially, it makes sense to add comments simply by virtue of the fact that you have a little group of crazies who can build up your traffic just by their constant frenzy of fights.”

Instead, Sullivan encourages an ongoing civil conversation his own way: by posting and responding to readers’ emails—particularly those that take him to task for this or that—in essence moderating a conversation that occurs not in a comments section, but on the blog itself.

He and the team that now works for him realized that this could be a powerful method. “If we treated them right and we gave them space, they were very good writers,” he said. “I mean really smart, interesting, nuanced, knowledgeable people. So we decided that this is a great thing.”

But don’t his readers get annoyed at not having a comments section? Three years ago or so he put it to a vote. “Our readers voted 2-to-1 against comments.

“The good news is there really are people out there—readers, consumers—who don’t want the sort of propaganda scream-match flame wars of large amounts of the partisan blogosphere,” Sullivan said. “And if you give them a space to really bring their expertise or their stories or their lives, and make sure that it’s a safe space—you don’t abuse them, you don’t set them up for mockery, you respect each and every one—then I think we basically got an unpaid staff of about 1.3 million people.”

The story has been updated to include a reference to a controversial Daily Beast article on which comments were recently closed.
