In a move that cracks down on some of the most well-organized extremist groups, Facebook’s security team said Thursday that it will more aggressively go after groups that engage in “coordinated social harm”—a term aimed at online networks that evade the company’s existing rules while engaging in threatening offline behavior.
Nathaniel Gleicher, Facebook’s head of security policy, outlined a three-part test in a phone call with reporters to define what qualifies as coordinated social harm under the new enforcement regimen. Groups must “have a systemic history of violations on our platform,” be “tightly coordinated on our platform working together to either evade our enforcement or maintain their persistence on our platform,” and be “contributing to or driving significant social harm.”
Examples of the kinds of “significant social harm” that could violate the policy, according to Gleicher, include “accelerating violence or undermining trust in critical medical understanding or could include presenting violence as a legitimate response to address government programs.”
While the new policy applies to a range of groups involved in offending behaviors, anti-vax activists served as its first enforcement target.
A case in point is the German group “Querdenken,” a conspiracy movement that protests lockdown rules and vaccines and is known for organizing demonstrations that frequently turn violent and attract members of the country’s far right and QAnon conspiracy theorists.
Querdenken served as the first case for the rollout of the new policy, Gleicher said. And while the definition of coordinated social harm is broad enough to encompass groups involved in a range of activities, the new policy is likely to cause problems for the most severe anti-vaccine groups, many of which have already repeatedly tangled with the company’s moderation and security teams.
Gleicher said Querdenken activists “typically portrayed violence as the way to overturn the pandemic-related government measures limiting personal freedoms” and “engaged in physical violence against journalists, police, and medical practitioners in Germany.”
On Facebook, the group’s activists “repeatedly violate[d] our Community Standards, including posting harmful health misinformation, hate speech, and incitement to violence,” according to Gleicher, and the company removed a number of associated pages and groups and blocked Querdenken websites from being shared on Facebook.
Many of the activities prohibited under Facebook’s coordinated social harm policy—including coordinating to conduct offline violence or disrupt public health efforts and spreading health misinformation—were already prohibited under the company’s community standards.
But a number of groups and networks on the company’s platform, including many in the anti-vaccine movement, have become adept at evading bulk moderation of prohibited content by speaking in coded language, like labeling groups as dedicated to organizing “dance” and “dinner” parties, and setting up duplicate “backup” accounts as fallbacks in the event of a suspension.
Facebook says the kinds of groups subject to coordinated social harm enforcement fall between the remits of its existing teams, which focus separately on offending content and offending behavior.
“We are seeing groups that pose, quite frankly, a risk of significant social harm that also engage in violations on our platform but don’t necessarily rise to the level” required for enforcement by the company’s dangerous organizations unit, which targets terrorist groups and violent organizations, or its coordinated inauthentic behavior team, which focuses on enforcement against coordinated influence operations in which trolls disguise their identity to inorganically amplify political messages.
While the new policy amps up the pressure on hardcore extremist networks like anti-vaxxers, Facebook officials emphasized that they intend for enforcement under the new policy to be “careful and deliberate.”
“One of the reasons we've been spending so much time establishing these thresholds is the recognition that we have to be incredibly deliberate in ensuring that we are writing policies and protocols that address the core adversarial and harmful networks without necessarily making organized authentic and non-harmful networks into a violation," David Agranovich, Facebook’s director of threat disruption, told reporters.