By Frauke Tepe*




It has long been recognized that words can be as hurtful as physical harm, which is why Hate Speech has long been a subject of discussion. In recent times, more communication has shifted to online formats than ever before. This new environment brings new challenges to light and gives Hate Speech a platform of a dimension never seen before. Speech is available worldwide, across jurisdictions, on platforms controlled by private companies whose registered offices are often in different countries than their users. Online communication takes place not only between friends and acquaintances, but also between complete strangers. While some of these communicants act under their full name, others choose to stay anonymous. The anonymity of the internet provides a shield for speakers to spread hate. While Hate Speech has always been a sensitive topic and the subject of many judicial rulings in different jurisdictions, the 21st century has brought new challenges in how to address it.

Especially with respect to the rise of new media, the calls for banning Hate Speech are increasing in volume. Intermediaries are being asked to immediately delete Hate Speech: e.g. the Netzwerkdurchsetzungsgesetz (Network Enforcement Act) in Germany makes intermediaries liable if illegal content is not deleted within a certain period of time.[1]

Such efforts are to be welcomed. However, one question remains unsolved, namely what exactly falls under the notion of Hate Speech. No uniform definition is to be found in international treaties or similar agreements. One consequence of this is that private actors – namely social networks such as Facebook and Co. – have the power to decide de facto which speech is prohibited. The line between Hate Speech and criticism is often very thin, and it can be difficult to distinguish between them. For example, Hate Speech can fuel racism and be discriminatory, but publishing a scientific article on the correlation of race and crime can do the same.[2] Sellars wonders, “[w]ithout a clear definition, how will scholars, analysts, and regulators know what speech should be targeted?”[3]

This essay will draft a definition of Hate Speech that takes into account the realities of the 21st century. First, the necessity of a definition will be discussed in order to build a foundation. When defining the notion of Hate Speech, different aspects must be considered, such as the forms of expression, the harm, and the targeted group. One aspect to be considered is the differing understanding around the world of which speech may be limited. Various challenges with regard to freedom of expression will be considered, leading to a final definition. The essay will not only examine definitions in international agreements and academic literature, but also closely consider the definitions adopted by social networks – the platforms that accommodate Hate Speech the most.




So far, there is no uniform definition of Hate Speech in any legally binding treaties. The term is not used in international human rights law. It is, for example, not enshrined in the European Convention on Human Rights (ECHR); furthermore, the European Court of Human Rights (ECtHR) has avoided limiting itself to a definition.[4] However, some legal texts address the issue of Hate Speech while not calling it by that name. The International Covenant on Civil and Political Rights (ICCPR) requires parties to prohibit “[a]ny advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence” by law.[5] In similar fashion, Article 4(a) of the International Convention on the Elimination of All Forms of Racial Discrimination (ICERD) requires the parties to “declare an offence punishable by law all dissemination of ideas based on racial superiority or hatred, incitement to racial discrimination, as well as all acts of violence or incitement to such acts against any race or group of persons of another colour or ethnic origin, and also the provision of any assistance to racist activities, including the financing thereof”. The Rabat Plan of Action has tried to bring further clarity to the formulation of Article 20(2) ICCPR.[6] Despite these efforts, the term Hate Speech is still widely used as a “generic term” for threats to individuals and groups, but also for “cases in which people may be simply venting their anger against authority.”[7] This underlines the need for a uniform definition. The scope of the obligations under those provisions varies, so that no uniform framework can, as yet, be achieved.[8] Additionally, a common definition would also lower the risk of manipulation by governments, which is a common fear, especially in times of elections.[9]

Naturally, the question arises as to how much benefit an international agreement would bring, especially given that different cultures and nations differ in their understanding of freedom of expression. For the definition to be effective, the political will to implement it is required: all parties must eventually agree on the definition. This poses a challenge, as the protected interests sometimes diverge widely. An international agreement has already been sought by the EU and is also advocated in the literature, but the USA in particular has opposed such regulation.[10] Protected by the First Amendment of the U.S. Constitution, freedom of expression is “the most cherished American constitutional right”.[11] Its main objective is to protect the citizen from interference by the state.[12] Whereas the U.S. understanding of freedom of speech is shaped by individualism and libertarianism, in European countries honour and dignity form the foundation of freedom of speech.[13]

Several cases illustrate the high standing of freedom of expression in the U.S. One such case is R.A.V. v City of St Paul, in which a cross was burnt in the yard of a black family.[14] The U.S. Supreme Court held that an ordinance which prohibited racially motivated “fighting words” was unconstitutional. This was limited by Virginia v Black, pursuant to which the criminalization of cross burning is constitutional if it is carried out with “intent to intimidate”. The cross burning itself cannot, however, be taken as prima facie evidence of such intent.[15] Thus, the U.S. system prohibits content-based restrictions on freedom of speech, whereas, for example, the German Grundgesetz allows such restrictions.[16]

The difficulty of combining these two approaches is even greater in the online environment, as several jurisdictions may be affected by one hateful post: e.g. a user posts a hateful comment on a social media platform in one country while the platform has its headquarters in a different country; additionally, the content is then available worldwide. In the Yahoo! case, the problem of differing understandings of protection in Europe versus the USA becomes apparent.[17] A French court held that Yahoo! was liable for the sale of Nazi merchandise, which was considered Hate Speech in France. Yahoo! argued that, since the content was uploaded in the U.S., French jurisdiction was not applicable; the French court dismissed that claim. Yahoo! then turned to a court in the U.S. and successfully sought a ruling that enforcement of the French decision would violate Yahoo!’s First Amendment protection.[18]

The case highlights the tension involved in regulating online speech extraterritorially and the powerlessness of states when it comes to cross-jurisdictional enforcement of court rulings.[19] An internationally valid definition of Hate Speech could overcome these differences. Cultural particularities would still have to be taken into account, but if, for example, social networks worldwide took a uniform definition as a basis, enforcement difficulties would no longer arise.  

In general, the transnational reach of private companies like Facebook could be an effective way to combat the problem of Hate Speech.[20] Especially in light of laws such as the German Netzwerkdurchsetzungsgesetz, which imposes the obligation to delete Hate Speech, among other things, on private platforms, a uniform definition is appropriate. Otherwise, there is a risk that permitted speech is banned as well, since the networks want to avoid sanctions and thus take precautions by deleting every critical posting, leading to private censorship. By means of a definition, principles by which Hate Speech can be determined are established at the international level, e.g. within the EU.

This can even have a positive effect on freedom of speech. One example is the EU Code of Conduct on Countering Illegal Hate Speech Online.[21] In 2016 Facebook removed the content in 28.3% of received notifications, whereas in 2018 the number rose to 82.4%.[22] The numbers show that, to a certain degree, a non-regulatory approach can be helpful.[23] They also suggest that a common understanding of Hate Speech as laid down in the EU Code of Conduct is beneficial,[24] even if the individual definitions of the platforms differ. This applies not only to intermediaries, but also to traditional courts. A uniform definition would require courts to be more rigorous in their analysis, which could in turn lead to greater protection of freedom of expression.[25]

Having established the necessity of a definition that provides guidance for states as well as intermediaries, the wording of such a definition will be discussed in the following sections.




In drafting a definition of Hate Speech, it is important to bear in mind that freedom of expression is a fundamental right and of importance for every individual’s personality and for democracy as a whole.[26] It should be noted that freedom of expression “is applicable not only to ‘information’ or ‘ideas’ that are favourably received or regarded as inoffensive or as a matter of indifference, but also to those that offend, shock or disturb the State or any sector of the population.”[27] Thus, care must be taken not to define the notion of Hate Speech too broadly, so that protected speech is not included.


(1) Forms of speech


There have always been different ways of expressing oneself. Expression does not necessarily require oral or written speech; certain actions may also express resentment and hatred. This includes, for example, cross burning[28] and Nazis marching through areas with a predominantly Jewish population (Skokie[29]). Symbols such as swastikas and the robes of the KKK convey powerful messages without a word being spoken.[30] Using the term Hate Speech creates the erroneous impression that only spoken words should be regulated.[31] It should be clear from the definition that this is not the case.

With new possibilities that arise online, even more forms are being added. On some platforms, communication occurs only through videos or photos. So-called memes and gifs are very popular at present and communication only through emojis is also possible. There are endless possibilities to express oneself and to communicate hatred. This should be taken into account in a definition, so that all forms of hateful expression can be considered Hate Speech.


(2) Addressee


Traditionally, the characteristics of race, ethnicity, religion and nationality are taken as the distinctive characteristics of Hate Speech. This is, for example, the case in Article 20(2) ICCPR. Article 6 of the Audiovisual Media Services Directive (AVMSD) at least prohibits “any incitement to hatred based on race, sex, religion, or nationality.” Today, however, a focus on these characteristics alone is not enough. Since the AVMSD took effect in 2010, the understanding of discrimination has developed through contemporary social justice movements, and a broader range of addressees is necessary. Sex, gender, sexual orientation and disability are other characteristics that serve to exclude people. These have yet to be included in international standards.[32]

When choosing the addressee of Hate Speech, a distinction is made between a group-based approach and equal protection clauses.[33] The group-based approach is, however, criticized as ‘under-inclusive’.[34] Google+ uses a mixture of both forms: it first lists different characteristics and then uses the open clause of “any other characteristic that is associated with systemic discrimination or marginalization”.[35] The wording still limits the group of possible addressees, but highlights the difference from cyberbullying, which can turn against anyone and differs in its consequences.[36] Such a formulation is appropriate in order to be as inclusive as possible while still taking into account the particularities of various cultures. For this reason, this wording is adopted for the proposed definition.

The speech must always be directed at an individual or an individualizable group.[37] Take the claim “soldiers are murderers” as an example: the claim is not directed at a particular soldier, but at soldiers working for the government. Rather than being construed literally, this claim constitutes an opinion and is thus constitutionally protected. The German Constitutional Court, which had to decide this case, came to the conclusion that an interpretation of the slogan results in the understanding that it casts soldiers as victims as much as killers.[38] If the statement referred to an individual or an individualizable group, it would be more likely to be understood as Hate Speech.


(3) Harm


In order to justify limitations on freedom of expression, the harm that individuals suffer from Hate Speech has to be considered.[39] This forms the most important part of the definition. Through careful wording, a line can be drawn between merely offensive or insulting statements and Hate Speech.

Especially problematic with Hate Speech online is that the connection between hateful expressions online and actual harm in the real world is difficult to trace.[40] This is true for any media, but the peculiarities of the internet, such as the anonymity of users and the speed with which messages are distributed to large audiences, make this particularly complex.[41]

Again, the difference between the U.S. and European approaches constitutes another challenge in this regard. The U.S. Supreme Court has clarified in its past rulings that restrictions on freedom of speech are possible only in the case of “incitement to violence”,[42] whereas the ECtHR considers incitement to hatred sufficient as well.[43] However, when defining the scope of harm, it should be remembered that physical harm and psychological harm can have the same effect on individuals.[44] Manifestations of hatred can constitute “a degradation of human dignity” without causing violence.[45] Here it should be noted that even small words can carry great weight in a definition, so that “or” should be used instead of “and”. Violence and hatred are two possible harms, but the exclusion of minorities can have consequences as serious as violence and hatred, which is why intolerance and discrimination are also included in the definition.

Recommendation R(97)20 of the Council of Europe introduces a broad definition, understanding Hate Speech as “all forms of expression which spread, incite, promote or justify racial hatred, xenophobia, anti-Semitism or other forms of hatred based on intolerance, (…).”[46] The definition does not limit itself to incitement, but also considers the spreading, promoting and justifying of racial hatred sufficient. Other definitions by academics use the phrase “promote hatred”, which, however, carries a less imminent standard than incitement.[47] Although such formulations are generally to be advocated due to the severity that hateful statements may carry, they are hardly compatible with the U.S. understanding. For successful enforcement of the definition, the standard should thus be kept stricter than that of Recommendation R(97)20 and other definitions.

Moreover, the likelihood of harm needs to be discussed. Incitement is by definition an inchoate crime.[48] Specifically, the “action advocated through incitement speech does not have to be committed for said speech to amount to a crime.”[49] This requires some degree of risk of harm to be identified. The Rabat Plan of Action thus suggests requiring a “reasonable probability that the speech would succeed in inciting actual action against the target group”.[50] In the U.S., suppressing Hate Speech because it has a ‘bad tendency’ to cause harmful effects is prohibited, whereas in most other countries the connection between speech and harm can be rather loose.[51] Brandenburg v Ohio requires the speech to be “likely to incite or produce such action”[52] in order to be prohibited, which is understood to apply only when incitement is imminent, nearly inevitable.[53] The linkage between speech and harm should not be interpreted too loosely; the causation should rather be direct.[54]


(4) Intent/purpose


In addition to the aforementioned aspects, the intention of the speaker is important. Article 20 ICCPR anticipates intent,[55] while, pursuant to Article 4 ICERD, the mere “dissemination of ideas based on racial superiority or hatred” is punishable. Here too, a uniform line must be followed.

In Jersild v Denmark, the ECtHR decided in favour of the journalist who conducted and edited an interview with a radical xenophobic group in Denmark, during which derogatory statements were made.[56] By fining Jersild for publishing racist statements, the Danish authorities violated Article 10 ECHR. In order to avoid further cases such as this, some requirement of intent should be included. This would be in line with the U.S. approach to protecting freedom of expression, as it limits the circumstances in which speech can be considered Hate Speech.

Ward defines Hate Speech as “any form of expression through which speakers primarily intend to (…) incite hatred against their targets.”[57] He argues that, in this way, it is taken into consideration “whether the speakers’ desire to communicate ideas” other than hate outweighs “the desire to injure their victims.”[58] Marwick/Miller require the “speaker’s message to only promote hatred (…) without communicating any legitimate message.”[59] They do not require direct intent, but question the purpose of the speech. Here, again, the special problem arises that intent is difficult to identify online.[60] Instead of using a strict intent requirement, one could require that the speech have “no redeeming purpose”.[61] Thus the use of Hate Speech for other purposes such as journalism, education or the like remains permissible.[62] Proving such a purpose is easier, so that the assessment of Hate Speech does not fail on the question of intent.


(5) In public


In order to be understood as Hate Speech, speech should by definition be made in public. A statement in a private space, e.g. at home, should continue to be protected as long as it is not immediately directed against a person with protected characteristics. Anything else would go too far and would not be compatible with U.S. ideals; it would rather lead to a regulation of freedom of thought.[63] Here, the special nature of the internet should be considered. Direct messages may superficially be considered private. However, exchanges in forums to which anyone can gain access by signing up, and the publication of private conversations, cannot be protected under the guise of privacy; they need to be seen as public speech. Some private conversations even take place on Twitter for everybody to follow publicly. This cannot be considered “private” in this context, as each user has the chance to read and further disseminate the messages.[64] Such a publicity requirement is already laid down in individual legal texts, e.g. the Canadian Criminal Code.[65]


(6) Context/medium


Hate Speech can be expressed in many different ways. Regardless of the format, Hate Speech can be couched in “offensive, angry, abusive and insulting language”, but it can at the same time also be “subtle, moderate, nonemotive, even bland; its message conveyed through ambiguous jokes, innuendoes, and images.”[66] Form is, however, not the only clue to the nature of Hate Speech. As stressed before, there are different approaches to protecting freedom of expression. In general, there are cultural differences which must be capable of being taken into account despite a uniform definition.

Facebook discusses the importance of context in determining Hate Speech and explains that the term ‘fag’ can be offensive, while in Britain it is a common term for a cigarette.[67] On a more striking level, in Germany the downplaying of acts committed under the rule of National Socialism is specifically penalized.[68] Every culture has its own social and historical understandings, which must be reflected in the definition. The European Court of Justice considered the “everyday language of the terms (…), while also taking into account the context in which they occur (…)” when interpreting Article 22a AVMSD.[69] The Rabat Plan of Action, as a multi-stakeholder process, recognises the great importance of context. It states that “[a]nalysis of the context should place the speech act within the social and political context prevalent at the time the speech was made and disseminated”.[70] It also highlights that context can serve as an indicator of a person’s intent.[71]

Furthermore, the medium chosen also deserves recognition when assessing Hate Speech. The ECtHR stressed this, inter alia, in Gündüz v Turkey, where the statements were made during a public live broadcast.[72] When hate messages are spread through the media, they can have an even more serious impact on individuals due to their larger reach.[73] If a hate message is posted online, it is accessible worldwide by millions of people. It might be deleted on one website, but by then it could have been shared dozens of times or might even be available on a new website. Until the message is deleted from there as well, a new process must be initiated, which takes time and allows further dissemination.

These requirements should not be included in the main body of the definition, but should nevertheless be part of an international treaty in order to provide guidelines on assessing proportionality for decision-makers such as courts, or intermediaries that are obliged to delete illegal content under the law or pursuant to their own Codes of Conduct.


(7) Definition


Taking into account the above discussed, the following definition can be proposed:

Hate Speech comprises all forms of expression that are made without redeeming purpose and are likely to incite violence, hatred, intolerance, or discrimination, directed at a certain or identifiable group, or a member thereof, distinguished by protected characteristics. The speech must be made either publicly or immediately directed at a member of a protected group.

Protected characteristics are race, religion, ethnicity, national origin, sex, gender, gender identity, sexual orientation, and disability, or any other trait that can be associated with systemic discrimination or marginalization.

In determining the proportionality, decision-makers shall consider, inter alia, the context and medium of the speech.

To keep the definition understandable, it was divided into individual sentences and the protected characteristics were listed separately. It captures the points discussed over the course of the essay and can thus be used as an international benchmark.

Nevertheless, the definition is not unassailable. The foregoing analysis highlights the challenges of drafting a definition for an international treaty whose parties have differing understandings of freedom of expression. For example, the definition remains open to interpretation due to its open-ended wording. A uniform application of the definition in different legal systems can thus not be guaranteed. U.S. courts might still enforce the definition less strictly than, for example, European courts. However, such a risk of slight deviations is to be accepted in order to find a definition that is internationally recognised. In any case, the definition ensures common ground for addressing Hate Speech. Moreover, it gives legal certainty to social networks operating internationally. In this way, they can enforce a uniform notion of Hate Speech in their networks worldwide and do not have to fear prosecution by authorities for failing to comply with their legal duties.




Because of the different understandings of freedom of expression, and thus also of the notion ‘Hate Speech’, a definition at the international level is favoured. In particular, due to the new challenges that the online world presents as the “new frontier”[74] for the dissemination of hate messages, the definition should be applied cross-jurisdictionally. An internationally binding definition of Hate Speech will not only help courts in deciding on the legitimacy of speech; it will also guide intermediaries such as social networks that are held responsible for the spread of Hate Speech on their platforms. Hate Speech is by now seldom confined to one jurisdiction and, as the Yahoo! case shows, this can lead to difficulties in enforcement. A uniform definition will facilitate a unified fight against Hate Speech.

In its current wording, the definition proposed here sets out clear requirements while remaining open to contextual dynamics. This is particularly important in order to appreciate the specificities of each culture. As stressed by McGonagle, “[t]he approach must combine sensitivity and strategy in order to conduct a fine balancing act between freedom of expression and minority rights.”[75] The different levels of harm that Hate Speech can cause are not thereby overlooked, while freedom of expression retains its full strength. States and private players should jointly ensure effective implementation of the definition in order to combat Hate Speech and address the challenges that come with the online environment.


*LL.M. in Innovation, Technology and the Law at the University of Edinburgh Class of 2020; trainee lawyer at the Regional Court of Cologne, Germany.

[1] Netzwerkdurchsetzungsgesetz 1 September 2017, BGBl I 2017 S3352.

[2] R Post, “Hate Speech”, in I Hare and J Weinstein (eds), Extreme speech and democracy (2009) 123 at 134 et seq.

[3] A F Sellars, “Defining Hate Speech” The Berkman Klein Center for Internet & Society Research Publication Series. Research Publication No. 2016-20 (2016) 4, available at

[4] T McGonagle, “Minorities and online ‘Hate Speech’: a parsing of selected complexities” (2010) 9(1) European Yearbook of Minority Issues Online 419; M Oetheimer, “Protecting freedom of expression: the challenge of Hate Speech in the European Court of Human Rights” (2009) 17(3) Cardozo Journal of international and Comparative Law 427 at 428 et seq.

[5] International Covenant on Civil and Political Rights art 20(2).

[6] Report of the United Nations High Commissioner for Human Rights on the expert workshops on the prohibition of incitement to national, racial or religious hatred, “Rabat Plan of Action on the prohibition of advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence”, 5 October 2012.

[7] I Gagliardone, D Gal, T Alves, G Martinez, “Countering online Hate Speech” (2015) UNESCO Series on Internet Freedom 7.

[8] T Mendel, “Does international law provide for consistent rules on Hate Speech?”, in P Molnar and M Herz (eds.), The content and context of Hate Speech: rethinking regulation and responses (2012) 417 at 418.

[9] I Gagliardone et al (n7) at 10.

[10] J Banks, “Regulating Hate Speech online” (2010) 24(3) International Review of Law, Computers & Technology 233 at 236; for example the EU made huge efforts to include Hate Speech in the Council of Europe’s Convention on Cybercrime, which was then amended because of refusal by the U.S.

[11] M Rosenfeld, “Hate Speech in constitutional jurisprudence: a comparative analysis”, in P Molnar and M Herz (eds.), The content and context of Hate Speech: rethinking regulation and responses (2012) 242 at 247.

[12] Ibid.

[13] Ibid at 259.

[14] R.A.V. v. City of St. Paul [1992] 505 U.S. 377.

[15] Virginia v Black [2003] 538 U.S. 343.

[16] M Rosenfeld (n11) at 267.

[17] La Ligue Contre La Racisme et L’Antisemitisme (LICRA) and Union Des Etudiants Juifs De France (UEJF) v. Yahoo! Inc. and Yahoo! France [2000] Tribunal de Grande Instance de Paris, 22 May 2000 and 22 November 2000; J Oster, “European and international media law” (2017) ch. 3 at 98.

[18] Yahoo! Inc. v. La Ligue Contre Le Racisme et l’antisemitisme (LICRA) [2006] 433 F.3d 1199.

[19] J Banks (n10) at 233.

[20] I Gagliardone et al (n7) at 15.

[21] The EU Code of Conduct on countering illegal Hate Speech online (2016), available at

[22] European Commission, “Factsheet – 4th monitoring round of the Code of Conduct” (2019), available at

[23] Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, “Tackling Illegal Content Online – Towards an enhanced responsibility of online platforms” COM(2017) 555 final (28 September 2017) 4.

[24] The Code of Conduct refers to the definition developed in the framework decision 2008/913/JHA of 28 November 2008 on combating certain forms and expressions of racism and xenophobia by means of criminal law and national laws transposing it: “all conduct publicly inciting to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin.”

[25] T Mendel (n8) at 428.

[26] See for example: Handyside v UK [1976] App No 5493/72, Council of Europe: European Court of Human Rights.

[27] Handyside v UK [1976] App No 5493/72, Council of Europe: European Court of Human Rights at para. 49.

[28] As in R.A.V. v City of St. Paul and Virginia v Black.

[29] National Socialist Party of America v. Village of Skokie [1977] 432 U.S. 43.

[30] M J Matsuda, “Public response to racist speech: considering the victim’s story” 87(8) Michigan Law Review (1989) 2320 at 2365 et seq.

[31] J Waldron, “Why call Hate Speech group libel?”, in J Waldron, The harm in Hate Speech (2012) 34 at 37.

[32] T McGonagle (n4) at 423.

[33] Example for an equal protection clause: Universal Declaration of Human Rights art 7; the ICCPR instead follows a group-based approach; see: P Cavaliere, “Digital platforms and the rise of global regulation of Hate Speech” 8(2) Cambridge International Law Journal (2019) at 282, available at SSRN:

[34] E Heinze, “Viewpoint absolutism and Hate Speech” 69(4) Modern Law Review, (2006) 543 at 565; P Cavaliere (n33).

[35] Terms and Policies for Google+, available at

[36] A F Sellars (n3) at 25.

[37] B Parekh, “Is there a case for banning Hate Speech?” in P Molnar and M Herz (eds.), The content and context of Hate Speech: rethinking regulation and responses (2012) 37.

[38] Tucholsky II [1995] case no. 1 BvR 1476/91, Bundesverfassungsgericht (German Constitutional Court); M Rosenfeld (n11) at 270 et seq.

[39] P Cavaliere (n33).

[40] I Gagliardone et al (n7) at 54; see also: A F Sellars (n3) at 28.

[41] I Gagliardone et al (n7) at 54.

[42] Brandenburg v Ohio [1969] 395 US 444.

[43] P Cavaliere (n33); see for example Directive 2010/13/EU OJ L 95, 15.4.2010 (Audiovisual Media Services Directive – AVMSD) art 6.

[44] Ibid.

[45] As argued by the Yugoslavian representative in U.N. General Assembly, Third Committee, U.N. Doc. A/C.3/SR.1079, 20 October 1961, para. 9; S Farrior, “Molding the matrix: the historical and theoretical foundations of international law concerning Hate Speech” 14(1) Berkeley Journal of International Law (1996) 1 at 26.

[46] Recommendation No. R. (97) 20 of the Council of Europe Committee of ministers to member states on “Hate Speech”, 30 October 1997.

[47] A F Sellars (n3) at 17.

[48] Report of the United Nations High Commissioner for Human Rights on the expert workshops on the prohibition of incitement to national, racial or religious hatred (n6) at para. 29(f).

[49] Ibid.

[50] Ibid.

[51] R Post (n2) at 134 et seq.

[52] Brandenburg v Ohio [1969] 395 US 444 at 447.

[53] A F Sellars (n3) at 28.

[54] Report of the United Nations High Commissioner for Human Rights on the expert workshops on the prohibition of incitement to national, racial or religious hatred (n6) at para. 29(f).

[55] Ibid at para. 29(c).

[56] Jersild v Denmark [1994] European Court of Human Rights, Grand Chamber, App. No. 15890/89.

[57] K D Ward, “Free speech and the development of liberal virtues: an examination of the controversies involving flag-burning and Hate Speech” 52(3) University of Miami Law Review (1998) 733 at 765.

[58] Ibid.

[59] A Marwick and R Miller, “Online harassment, defamation, and hateful speech: a primer of the legal landscape” CLIP Report (2014) 17.

[60] A F Sellars (n3) at 28.

[61] Wording used in A F Sellars (n3) at 30 et seq.

[62] See also YouTube’s Hate Speech policy, available at:

[63] A F Sellars (n3) at 29; see for example Stanley v Georgia [1969] 394 U.S. 557.

[64] A F Sellars (n3) at 29.

[65] s 319(3); also the Australian Racial Discrimination Act 1975 s18C(2).

[66] B Parekh (n37) at 41 et seq.

[67] R Allan, “Hard questions: who should decide what is Hate Speech in an online global community?” (2017) Facebook Newsroom, available at:

[68] Strafgesetzbuch (German Criminal Code) s 130(3).

[69] Case C-244/10 Mesopotamia Broadcast A/S METV and Case C-245/10 Roj TV A/S v Bundesrepublik Deutschland [2011] ECR 2011 I-08777 at para. 40; R Craufurd-Smith et al., Media law text, cases, and materials (2014) ch. 5.9 Content requirements I: The protection of minors and Hate Speech at 221.

[70] Report of the United Nations High Commissioner for Human Rights on the expert workshops on the prohibition of incitement to national, racial or religious hatred (n6) at para. 29(a).

[71] Ibid; see also: R Allan, (n67).

[72] Gündüz v Turkey [2003] ECtHR App. no. 35071/97 at para. 49.

[73] Recommendation No. R. (97) 20 of the Council of Europe Committee of Ministers to Member States on “Hate Speech”, 30 October 1997; similar: M Rosenfeld (n11) at 281.

[74] J Banks (n10) at 234.

[75] T McGonagle (n4) at 440.