Removing the Kahane Google App Isn’t Censorship

Response

Fighting speech with more speech is a powerful slogan, but is banning extremist material always a bad idea? David Sterman defends Google's decision to take down the Kahane Google App.

In a recent Open Zion column, Zack Parker criticized as censorship Google’s decision to take down a Google App containing Kahane quotes, to which the radical settler extremist Baruch Marzel had linked. While the objective of preserving free speech is pure, labeling the takedown censorship misunderstands the nature of free speech, and acting on that criticism would deal a severe blow to counter-radicalization efforts.

Censorship is definitely something to be opposed. It violates core American values of free speech and open debate. Additionally, it just doesn’t work. As a recent report on online radicalization authored by Peter Neumann and published by the Bipartisan Policy Center notes, legal action against particular websites is limited by the transnational nature of the Internet and the inherently national nature of bans on content. European countries that have tried to ban extremist content have had little success because of the ease with which websites can be shifted to a host in another country while continuing to be accessible in the country in which the content is banned.

However, a very clear line must be drawn between censorship by the government and the implementation of terms of service that remove extremist content by private companies, at times with informal government support. Allowing private companies and individuals to assert terms of service that prohibit hateful materials is not censorship. It’s a way of signaling social disapproval that is essential to the preservation of a vibrant public sphere.

When free speech is understood to bar not only government censorship but also private shunning, the public sphere suffers. Instead of tyranny via silencing, a tyranny of disruption is imposed as false and extremist information colonizes the discussion. A powerful demonstration of this fact is a comparison of the open comments sections of most websites, which rarely contain insightful commentary and are littered with hateful rhetoric, with the more constrained and truly impressive comments section of Ta-Nehisi Coates’ blog at The Atlantic.

While fighting speech with more speech is a powerful slogan, not every speaker nor every ideology deserves a response. Sometimes more speech on a topic encourages a community to spend time debating that which ought not be up for debate instead of the true issues of the day. Debating Holocaust deniers only gives them a platform they otherwise would not have. Putting “scientists” who deny global warming on the news obscures the fact that there is no longer any true debate in the scientific community over the existence of human-caused climate change. The advocacy of a mass expulsion of Palestinians through terrorism should not be up for debate, and its advocates should not get prime space on the airwaves, in the local bookshop, or on Google Play. To be sure, the government shouldn’t use its coercive apparatus to ensure such an exclusion, but that doesn’t mean private companies should have no qualms about allowing their services to be used to spread such extremist content.

Besides, taking a principled stand against the removal of the application as censorship would have severe consequences for efforts to counter radicalization online. While having Google or any other web service enforce terms of conduct will not prevent hardened extremists or researchers from finding the materials they seek, it will prevent those with little knowledge from encountering extremist material while conducting general searches on a topic, as the Bipartisan Policy Center report notes. In the context of jihadist Twitter accounts, for example, empirical data suggests that taking down extremist accounts reduces their readership.

Of course, there are policy questions that must be evaluated: whether urging Google to take down the Kahanist app is a worthwhile use of resources, whether the app actually has a radicalizing influence that may promote violence, and whether the decision to take it down would generate a counterproductive backlash. But these are questions of policy, not principle. They should be evaluated based on data and research. It is for this reason that the Bipartisan Policy Center report recommends an approach built on increased informal cooperation to ensure effective takedowns:

Government should accelerate the establishment of informal partnerships to assist large Internet companies in understanding national security threats as well as trends and patterns in terrorist propaganda and communication, so that such companies become more conscious of emerging threats, key individuals, and organizations, and find it easier to align their takedown efforts with national security priorities.

As communities struggle to combat radicalizing influences in their midst that have been given new life by the Internet, it is critical that we not adopt an absolutist vision of free speech that extends the necessary protection from government censorship into the realm of private dealings. Takedowns are a necessary tool of counter-radicalization. The increasing ability and willingness of private companies to take down extremist material will help ensure a more vibrant and freer public sphere.
