Has Innocence of Muslims ended the innocence of YouTube?
Join us to debate the role internet platforms like YouTube should play in setting free speech agendas in your country, your language and across the world. Online editor Brian Pellot kicks off the discussion.
An interior view of the US consulate, which was attacked and set on fire by gunmen in Benghazi (Photo by Reuters/Esam Al-Fetori).
The fact that Google, which owns YouTube, has voluntarily blocked the Islamophobic video clip Innocence of Muslims in Egypt and Libya has opened a spirited debate. Where YouTube is localised with country-specific versions of the site, Google routinely accepts government requests to restrict local access to content that clearly violates local laws. Google has restricted access to Innocence of Muslims in Saudi Arabia, Jordan, India, Indonesia, Malaysia and Singapore on these grounds. But for Google to preemptively restrict access, without any request from the Egyptian or Libyan governments, based on what it judged to be "very sensitive situations" amid violent protests is unprecedented and troubling.
Free Speech Debate’s sixth draft principle says, “We neither make threats of violence nor accept violent intimidation.” Deadly demonstrations around the world, ostensibly fueled by outrage over Innocence of Muslims’ denigrating portrayal of the Prophet Muhammad, have been used to pressure governments, internet service providers and Google to block the video. Violent attacks are unjustifiable in any circumstance and must obviously be condemned, but caving in to violent intimidation can also be dangerous.
Days after US ambassador to Libya J. Christopher Stevens was killed in a Benghazi attack purportedly stemming from outrage over Innocence of Muslims, the White House and Australia urged Google to review the video against the company's terms of service, a request Google denied. The company had already reviewed the video and publicly declared that it neither violated its terms of service nor constituted hate speech because it did not directly incite violence. YouTube's terms of service make no reference to violence or incitement, but the site's community guidelines say, "we don't permit hate speech (speech which attacks or demeans a group based on race or ethnic origin, religion, disability, gender, age, veteran status, and sexual orientation/gender identity)".
The distinguished American legal scholar and First Amendment expert Robert C. Post rightly argues that Innocence of Muslims, which I link to here, does not “attack” a group based on religion. But it does seem to “demean” Muslims and their faith. With YouTube’s community standards in mind, it is unclear why the video has not been removed or at least marked as offensive.
This video of a bishop claiming no Jews died in gas chambers during the Holocaust is preceded on YouTube by a message that reads, "The following content has been identified by the YouTube community as being potentially offensive or inappropriate. Viewer discretion is advised." This warning likely resulted from viewers flagging the video as inappropriate under the "Hateful or Abusive Content: promotes violence or hatred: religion" tag. According to YouTube's Hateful Content policy, "if a video that you have flagged or comment that you have reported hasn't been removed, it's because it doesn't violate our hate speech policies". As many viewers clearly find Innocence of Muslims' message offensive and hateful, the absence of a warning at its start seems puzzling. It should, however, be noted that videos other religions might find comparably offensive have also not been removed from YouTube or branded "potentially offensive". Christians could be equally outraged by this video of Jesus singing I Will Survive dressed only in a nappy and then getting run over by a truck, but the video remains accessible—with 10 million views and no warning of offence.
More startling than these apparent discrepancies has been Google’s near silence about its decisions. As of September 26, two weeks after the US ambassador was killed in the first round of protests over the video, not a single mention of Innocence of Muslims had appeared on Google’s official blog, public policy blog, or YouTube blog. The company did, however, issue several statements to the press.
On September 15, a statement was released saying: “We work hard to create a community everyone can enjoy and which also enables people to express different opinions. This can be a challenge because what’s OK in one country can be offensive elsewhere. This video—which is widely available on the Web—is clearly within our guidelines and so will stay on YouTube. However, we’ve restricted access to it in countries where it is illegal such as India, Saudi Arabia and Indonesia as well as in Libya and Egypt given the very sensitive situations in these two countries. This approach is entirely consistent with principles we first laid out in 2007.” These principles make clear the complex and challenging decisions Google faces regarding free expression and controversial content online. The 2007 post also explicitly states, “Google is not, and should not become, the arbiter of what does and does not appear on the web.” Is that not precisely what the company did when it voluntarily blocked Innocence of Muslims in Libya and Egypt?
As of September 26, Google had received requests from governments in 21 countries, including those already mentioned, to review or block Innocence of Muslims. In Pakistan, Bangladesh and Afghanistan, where YouTube is not localised with a country-specific version of the site, Google rejected removal requests. The governments of those countries responded by blocking YouTube entirely. Bahrain, the UAE, Sudan and Kyrgyzstan all blocked Innocence of Muslims without even submitting a takedown request to Google. The Maldives, Brunei and Russia have also threatened to block the video, and an Arab-Israeli political party requested it be censored locally. Please add the latest updates from your country in the comment thread below.
Jillian C. York argues, “Although restricting the video in [Egypt and Libya] might seem tempting in the wake of the horrific violence that occurred in Libya, it is in the best interest of neither the company nor, arguably, the citizens of those countries for Google to be the arbiter of acceptability.”
Google's capitulation to violent intimidation in Egypt and Libya could save lives in the short term, but it could also set a dangerous precedent, opening a Pandora's box of grievances every time controversial content is posted and someone or some group takes violent offence. This could in turn lead to greater censorship and ultimately greater violence if people decide that killing is the most effective way to air their grievances and get their way.
Brian Pellot is online editor at Free Speech Debate.
We neither make threats of violence nor accept violent intimidation.
Timothy Garton Ash
A personal introduction
The power of speech defines us as human beings. Language enables us to negotiate our differences in ways not available to most animals. Yet throughout history this power has been used to animate us to kill other members of our own species. Even the most outspoken advocates of free speech therefore recognise that there should be limits to how far we can allow words and images to incite violence. The difficulties begin when it comes to working out where those limits should be.