Facebook Tells Congress New Zealand Shooting Video Wasn’t ‘Gruesome’ Enough to Flag

‘NOT ENOUGH GORE’

‘You mean we have all this technology and we can’t pick up gore?’ one congressman told the Facebook rep. ‘How many heads must explode before they pick it up?’

When a Facebook official tried to help members of Congress understand the company’s struggle to get the horrific New Zealand shooter video off the social network, it didn’t go over well.

On March 27, representatives from four social media companies gathered with members and staff of the House Homeland Security Committee for a briefing. The focus: how the Global Internet Forum to Counter Terrorism—an industry group composed of Facebook, Twitter, Google, and Microsoft—had responded to the New Zealand shooting. The shooter killed 50 people at two mosques in Christchurch, New Zealand, and live-streamed the massacre; the livestream stayed online for an hour, until New Zealand law enforcement asked the company to take it down. Users successfully re-uploaded the video of the murders onto Facebook hundreds of thousands of times.

The members of Congress who gathered for a closed-door briefing had lots of questions for Brian Fishman, Facebook’s policy director for counterterrorism. One of the biggest: Why didn’t Facebook’s counter-terror algorithms—which it rolled out nearly two years ago—take down the video as soon as it was up?

Fishman’s answer, according to a committee staffer in the room: The video was not “particularly gruesome.” A second source briefed on the meeting added that Fishman said there was “not enough gore” in the video for the algorithm to catch it.

Members pushed back against Fishman’s defense. One member of Congress said the video was so violent, it looked like footage from Call of Duty.

Another, Missouri Democrat Rep. Emanuel Cleaver, told The Daily Beast that Fishman’s answer “triggered something inside me.”

“‘You mean we have all this technology and we can’t pick up gore?’” Cleaver said he told Fishman. “‘How many heads must explode before they pick it up? Facebook didn’t create darkness, but darkness does live in Facebook.’”

“That darkness will live right there in Facebook until Facebook can declare all-out war on darkness,” Cleaver continued. “If they’re unable to get that done, I guess I should just tell my children and my grandchildren, from Shakespeare, ‘Something wicked this way comes’—because that is what we would expect on a daily basis if Facebook can’t come up with a way to stop it.”

Facebook’s filters can move fast. Before reaching deals with record labels, Facebook rapidly pulled down videos featuring copyrighted music, as Tech Times detailed. When Sen. Elizabeth Warren posted campaign ads criticizing the social media giant, the company temporarily pulled them down. And just last month, Facebook announced it could use artificial intelligence to remove revenge porn before anyone even reports it.

But Fishman still had a point, according to a source who’s worked extensively on terrorists’ use of online propaganda and spoke anonymously because of professional concerns.

“While the industry has made significant advances in using machine learning to identify violent and terrorist content, it still has a long way to go,” the source said. “And Brian was giving, however inartfully, an honest answer about some of the limitations of Facebook’s technology.”

Spokespersons for Facebook and for the committee declined to comment because the meeting was behind closed doors.

The company, meanwhile, published a blog post after the shooting addressing criticism of its artificial-intelligence systems.

“AI systems are based on ‘training data,’ which means you need many thousands of examples of content in order to train a system that can detect certain types of text, imagery or video,” the post reads. “This approach has worked very well for areas such as nudity, terrorist propaganda and also graphic violence where there is a large number of examples we can use to train our systems. However, this particular video did not trigger our automatic detection systems. To achieve that we will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare.”
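The post does not detail Facebook’s models, but the constraint it describes is generic to supervised machine learning: a classifier only reliably flags what it has seen many labeled examples of. A minimal sketch in Python with scikit-learn makes the point concrete; the synthetic features, class sizes, and numbers below are illustrative, not anything Facebook has disclosed.

```python
# Toy illustration (not Facebook's system) of the training-data problem:
# a detector for a rare class of content barely fires until it has seen
# thousands of labeled examples of that class.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_features(n, shift):
    # Synthetic stand-ins for feature vectors extracted from videos.
    return rng.normal(loc=shift, scale=1.0, size=(n, 64))

for n_rare in (10, 10_000):
    # Class 0: plentiful benign content. Class 1: the rare violative kind.
    X = np.vstack([make_features(10_000, 0.0), make_features(n_rare, 0.4)])
    y = np.array([0] * 10_000 + [1] * n_rare)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    recall = recall_score(y_te, clf.predict(X_te))
    print(f"{n_rare:>6} rare examples -> recall on rare class: {recall:.2f}")
```

With ten examples of the rare class, the detector essentially never fires; with ten thousand, it catches most of them. Livestreamed attacks, as the blog post notes, thankfully do not supply ten thousand training examples.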

Lawmakers have grown increasingly frustrated with the social media giant over the years as it struggles to rein in extremist content while exerting ever more control over what Americans see and read. The company faces pointed scrutiny from Capitol Hill and has become a top target for both the right and the left. Many Hill progressives, including Rep. David Cicilline, who heads a panel on antitrust, argue that Facebook and other tech giants are too big for their own good and should face tough antitrust enforcement. Sen. Elizabeth Warren, the Massachusetts Democrat, has even made that argument part of her presidential platform.

Meanwhile, many Republicans argue that Facebook and other social media behemoths stifle conservative points of view. And just about everyone is upset about Facebook’s failure to keep Russian bots and trolls from spreading incendiary content before the 2016 election.

As a result, criticizing Facebook has become one of Capitol Hill’s rare bipartisan pursuits.

But while its government-relations challenges are sizable, its artificial-intelligence challenges are mammoth.

“The first 12 minutes was just him driving around, so any review—technical or human—that would’ve occurred would just see a guy mouthing off in a car,” said Danah Boyd, the president of the technology research institute Data & Society. “Even experienced researchers thought it might’ve been some kind of joke at first.”

The Christchurch shooter also avoided setting off too many warning bells for Facebook, Boyd noted. When he uploaded the original shooting video to 8chan, for example, he didn’t include a direct link to Facebook, so the social network wouldn’t see a surge of suspicious traffic from the troll haven.

From there, the video was uploaded to Facebook over 1.5 million times, according to The Washington Post. The social network’s automatic gatekeepers kept 1.2 million of those uploads out. (“Matching two videos frame-by-frame once you have the original is relatively easy,” Boyd noted.) But 300,000 made it through.
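Boyd’s parenthetical is the crux of the technical asymmetry. Neither she nor Facebook specified the matching technique, but a standard industry approach is perceptual hashing: sample frames from the known original, compute compact hashes that survive re-encoding, and compare new uploads against them. A minimal sketch, assuming the opencv-python, Pillow, and ImageHash packages (the sampling rate and distance threshold here are illustrative):

```python
# Sketch of frame-level video matching via perceptual hashing; a common
# industry technique, not Facebook's published implementation.
# Requires: pip install opencv-python Pillow ImageHash
import cv2
import imagehash
from PIL import Image

def frame_hashes(path, every_n=30):
    """Perceptual hash (pHash) of every Nth frame of a video file."""
    hashes = []
    cap = cv2.VideoCapture(path)
    i = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            hashes.append(imagehash.phash(Image.fromarray(rgb)))
        i += 1
    cap.release()
    return hashes

def matches_known_video(original_path, upload_path, max_bits=8):
    """True if most sampled frames of the upload match the known original.

    Comparing pHashes by Hamming distance tolerates re-encoding and
    resizing, which is why matching a known video is "relatively easy";
    recognizing violence in a brand-new video is the hard part.
    """
    ref = frame_hashes(original_path)
    up = frame_hashes(upload_path)
    if not up:
        return False
    hits = sum(1 for h in up if any(h - r <= max_bits for r in ref))
    return hits / len(up) > 0.5
```

A matcher along these lines catches straightforward re-uploads, consistent with the 1.2 million copies that were blocked; the 300,000 that got through were presumably edited, cropped, or re-recorded heavily enough to push the frame hashes past any workable threshold.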

In this way, Boyd said, the Christchurch shooter and his digital supporters weren’t “just doing this horrible, first-person shooter thing. They were testing [Facebook’s] system, and paving the way for future attacks.”

—with additional reporting by Noah Shachtman
