According to Disney CEO Bob Iger, "Hitler would have loved social media." The advent of cheap radio allowed Hitler's hate speech to reach every corner of Germany. Newspapers ceased to be publishers with editorial control and instead became platforms for the Nazis to spread false, unfettered, and unmoderated stories about Jews. Nazi Propaganda Minister Joseph Goebbels is believed to have said that "propaganda works best when those who are being manipulated are confident they are acting on their own free will." To defend their lack of initiative in moderating hate speech on their platforms, the CEOs of social media companies have often grabbed the low-hanging fruit of freedom of speech. However, they do so without acknowledging the axe on the branch: that hate speech is killing freedom. Content moderation should not trample on freedom of expression, but while human rights violations continue to occur due to hate speech and misinformation online, social media companies profit from the very algorithms and policies that allow them to occur.


In the Netflix documentary 'The Social Dilemma', Cynthia M. Wong, former Senior Internet Researcher for Human Rights Watch, discusses how the weaponisation of social media can lead to severe offline harm. Wong cites the situation in Myanmar as a prime example. Members of the Myanmar military embarked on a lengthy Facebook campaign disseminating anti-Rohingya propaganda. Given that Facebook has some 18 million users in Myanmar, fake online stories, such as one that circulated in 2014 claiming that a Muslim man had raped a Buddhist woman, can have severe repercussions in inciting communal hatred. Human rights groups claim that the anti-Rohingya content propagated "murders, rapes and the largest forced human migration in recent history," according to The New York Times. Having commissioned an independent report, Facebook admitted in 2018 that its platform had been used to "incite offline violence" in Myanmar.


In the aftermath of George Floyd's killing, Donald Trump tweeted "when the looting starts, the shooting starts" in response to Black Lives Matter protesters clashing with the police in Minneapolis. Twitter took the unprecedented step of adding a warning to the tweet, saying it violated the company's rules about glorifying violence. Facebook, on the other hand, decided to keep Trump's controversial posts up, including one that used the same phrase as his tweet. This inaction provoked a rallying cry for Facebook to adjust its approach to hate speech. In July 2020, a coalition of non-profit organisations including the National Association for the Advancement of Colored People (NAACP) launched the 'Stop Hate for Profit' campaign, urging Facebook's advertisers to pause their advertisements on the platform. Color of Change, a member of the coalition, says on its website that the "silencing of Black voices" and growing "hate, bias and discrimination" on Facebook are able to continue unhindered because of the $70 billion in annual revenue the company receives from corporations.

Facebook is not the only company that has allegedly supported "bias" and "discrimination". A recent report by Amnesty International suggests that Facebook and Google have aided the Vietnamese government in censoring content that appears to criticise the authorities. Such reports refute the claim that social media platforms are politically "neutral". Even if their intention is not to be affiliated with any particular political party, the monetised nature of their services has disproportionately silenced the voices of those speaking out for their rights, while amplifying the voices of those who wish to take those rights away. The promise of social media as a tool for protest is limited, given that it is "compromised by algorithmic violence manufactured for neoliberal ends," says Padmini Ray Murray, founder of Design Beku. Indeed, it is this "algorithmic violence" that contributes to the rapid dissemination of hate speech, engineering a dangerous platform for the likes of white supremacists and neo-Nazis.


Social media companies often cite freedom of expression and the right to free speech as justifications for their controversial decisions not to remove what many consider to be hate speech. The right to freedom of expression is enshrined in Article 19 of the United Nations Universal Declaration of Human Rights (UDHR). However, it is not an absolute right and can be restricted by governments in exceptional circumstances, such as for the protection of national security or public order. Despite the failure of social media companies to effectively eliminate hate speech, many are also wary of governmental interference. The Cato Institute argues that "the history of broadcast regulation shows that government regulation tends to support rather than mitigate monopolies".

In the US, it is governments rather than private companies that are restricted by the First Amendment, which protects freedom of speech. Social media companies can therefore legally remove content that violates their community guidelines, even if it falls under the category of protected speech. An important distinction must be drawn between 'publishers' and 'platforms'; social media companies arguably fall into the latter category. Section 230 of the Communications Decency Act (CDA) of 1996 protects tech companies from liability when their users upload illegal content, albeit with a few exceptions. Without this statutory protection, social media companies could be held legally responsible for a user's defamatory comment, for example. However, social media companies' failure to moderate hate speech online has led US lawmakers to consider amending this legislation.

Ironically, by failing to remove hate speech, platforms incidentally curb the free speech of particular groups and demographics. An investigation by Amnesty International found that Twitter was a "toxic place for women." The report noted that experiences of hate speech and abuse led women either to self-censor what they posted or to leave the platform entirely.


Since Election Day on the 3rd of November, Twitter has flagged 152 of Trump's 578 tweets and retweets (as of the 23rd of November). A significant proportion of these were flagged for being misleading or containing misinformation about what Trump claims is election fraud, despite a lack of supporting evidence. In early November 2020, Twitter banned Steve Bannon's account after the former Trump advisor called for the beheading of Dr Anthony Fauci, violating Twitter's policy on the glorification of violence. Bannon's comments, made in a video, remained on Facebook for ten hours before being removed. Based on how social media companies have responded to misinformation about COVID-19 and the 2020 US presidential election, it is evident that they remain unprepared and lack effective planning.

A recent report published by the Forum for Information and Democracy has put forth numerous suggestions on how social media can be regulated. One suggestion is to require that platforms "maintain up-to-date reference documents and release them to vetted researchers and regulators, on each core function of the algorithms…" Asked whether the content moderation proposals would stifle free speech, Christopher Wylie, the Cambridge Analytica whistleblower, told the BBC: "…freedom of speech is not an entitlement to reach…you are not entitled to have your voice artificially amplified by technology".

It is difficult to argue that people are acting entirely of their own free will when the algorithms used, and the data provided to platforms, constantly personalise the content they see. "The Russians didn't hack Facebook. What they did was they used the tools that Facebook created for legitimate advertisers and legitimate users, and they applied it for a nefarious purpose," comments Roger McNamee, an early Facebook investor. Every click, tap, like, and comment we make has real-life consequences beyond our screens. Social media companies are complicit in human rights violations around the world. Given how much data these companies hold about us, it is not enough to be a passive consumer. We need to actively call on social media companies to take content moderation more seriously and stop profiting from hate.

Ayesha is an LLB student at the University of Leeds. As an aspiring barrister, she enjoys advocacy and has spoken at platforms including TEDx and GESF. She has a keen interest in both public and international law. She is also the founder of 'COSMOS', a student-led initiative that organises projects to promote the UN Sustainable Development Goals.

This post has been published in collaboration with Human Rights Pulse. Human Rights Pulse aims to build a platform that brings together human rights practitioners, policymakers, campaigners, and students to raise awareness of current human rights issues around the world and to promote solution-oriented discussion.