Why did Facebook delete a call for an anti-fascist rally in Hungary?

Facebook’s automatic detection of the word ‘Jude’ led to the blocking of a Hungarian anti-fascist group’s post. Tamas Szigeti explores the worrying implications of automatic filtering for freedom of speech.

Hungary is in political turmoil and far right ideas, literature and politics are back in force. The main far right party, Jobbik, gained more than 15% of the popular vote in the 2010 election, becoming the third strongest party in parliament.

Since then Jobbik has used the floor of parliament to espouse ideas ranging from segregating children of Roma origin in schools, through claims about hate crimes committed by Roma people against Hungarians, to questioning the loyalty of Hungarian Jews to their motherland. On November 26th, in the midst of renewed violence in Palestine, a far-right MP urged the government to draw up lists of Hungarian Jews with Israeli passports because of “their posing a risk to national security”. Needless to say, drawing up ethnic lists is a mad proposition, especially in a country that once collaborated in the mass murder of half a million of its Jewish citizens and tens of thousands of Roma people. Unlike on many previous occasions, when the populist right-wing government led by Viktor Orbán had remained silent or even used far-right symbolism, this time the whole political class condemned the unacceptable utterances.

There has been a serious worldwide debate going on about hate speech for decades. People of goodwill disagree on the issue. Yet those who oppose hate speech laws (such as Ronald Dworkin), and those who back them (such as Jeremy Waldron) would surely agree on the right of anti-fascist people to organise a protest against the impugned utterances. Facebook apparently disagreed.

The day after Marton Gyongyosi, a Jobbik member of parliament, floated the idea of a Jewish list, a handful of civil activists started to organise a flash-mob protest against antisemitism, setting up a Facebook event page calling people to the Parliament the following day. They urged participants to display a yellow star with the word ‘Jude’ inside, the badge that Jews were forced to wear on their jackets in Nazi Germany. This was intended as a powerful message of nationwide solidarity in a deeply divided nation. The event was also publicised by the hugely influential opposition movement Milla (One Million for Freedom of the Press) on its Facebook page, which would guarantee high publicity. Or so the organisers hoped.

Hours after the event was posted on the social network, Facebook suddenly removed it. Why? According to the official explanation, Facebook detected and then automatically removed the hateful content, namely the yellow star with the word ‘Jude’. (Facebook issued a one-off pop-up message warning that it had detected inciting content which did not adhere to its policy, and that the content had therefore been automatically taken down. The post and the pop-up message then disappeared, so neither can be linked to.) Weeks after the protest took place, Facebook still owes the organisers a proper answer, since the post on Milla’s page remains removed. Facebook’s policy makers almost certainly did not intend to block anti-fascists from using Nazi-style Jewish badges on the internet to fight neo-Nazis, but the episode shows the perils and capriciousness of automated content regulation.

Those who are prepared to defend Free Speech Debate’s draft principle 2, the freedom of the internet from illegitimate encroachments, should take seriously the risks of automatic filtering intended to serve (what some would call) acceptable speech restrictions. The unintended consequences simply cannot be ignored.

Tamas Szigeti is a Weidenfeld Scholar and MPhil candidate in law at the Oxford Faculty of Law.


Comments (1)


  1. It’s unfair. I can’t understand the excuse given in this matter. Automated actions clearly dominate how Twitter and Facebook operate. This explanation is too ridiculous and weak.




Free Speech Debate is a research project of the Dahrendorf Programme for the Study of Freedom at St Antony's College in the University of Oxford. www.freespeechdebate.ox.ac.uk
