
A Feature-Based Approach to Assess Hate Speech in User Comments

Une approche multicomposante pour analyser le discours haineux dans les commentaires en ligne
Liane Reiners et Christian Schemer
p. 529-548

Abstract

Hate speech in online user comments poses a challenge for the media, platforms, legal authorities, and the general public. This form of communication can harm the quality of online discussions, but it can also poison public discourse and civic life. This explains the plethora of studies from different disciplines addressing the topic. However, the diversity of disciplines and approaches complicates a shared understanding of hate speech. Definitions are often too broad because they include factors such as the commenter's intention or the consequences for stigmatized groups. Moreover, the binary categorizations (hate/no hate) applied in some disciplines do not account for the fact that hate speech varies in intensity, and they ignore qualitative differences between its dimensions. The present research proposes an approach for analyzing hate speech in online comments that concentrates on some of its manifest features (group labels, swear words, attributions of traits and actions, treatment recommendations). Its use is demonstrated in a pilot study.

Full text

Hate speech, verbal aggression and other forms of dangerous speech in audience comments are a serious challenge for journalists, media actors and platform owners, but also for the general public (Ksiazek, Springer, 2018). These styles and contents of communication can harm the discussion quality in comment sections, but also public discourse and civic life in general (Herbst, 2010). Researchers and practitioners have developed various strategies to tame hate speech and to encourage users to participate in a constructive way (Domingo, 2011; Ksiazek, Springer, 2018). Although these studies provide valuable insights into what practitioners can do about various forms of communicative atrocities in comment sections, it is often unclear what exactly the target of moderation is. In other words, research on the classification of hate, dangerous speech, verbal aggression, incivility and many other related phenomena across disciplines has produced a cacophony of concepts that is difficult to disentangle. For this reason, it is important to clarify what hate speech is in order to assess its prevalence and the extent to which it is amenable to change, e.g., by means of prevention programs, moderation, or censorship. While quite narrow definitions result in low estimates of hate speech, broader conceptions increase its pervasiveness. For instance, hate speech defined as the use of ethnic slurs has resulted in a prevalence of less than 1% of tweets being classified as hateful (Chaudhry, 2015), while hate speech understood as encompassing antagonistic and offensive expressions has resulted in 12% of tweets being classified as hateful (Burnap, Williams, 2015).

An additional problem is that researchers tend to pragmatically rely on dichotomous conceptions of hate speech, i.e., either there is hate speech in a user comment or there is none. There are, of course, more sophisticated approaches that rely on a unidimensional gradual scale to assess the intensity of hate speech, i.e., how much hate speech a given post contains. The present research gives an overview of these different strands in the field of hate speech and reviews existing research to demonstrate the shortcomings of these categorical or unidimensional definitions and operationalizations. Against this backdrop, we propose an alternative way of conceptualizing hate speech for the analysis of small and large quantities of text. Specifically, this approach is feature-based, implying that hate speech is not seen as a holistic unity, but as emerging in different facets and forms in communication. Based on these features or a combination thereof, researchers and practitioners can create hate speech intensity scales that can also be used to explore the structure of hate speech. We end this essay with the results and discussion of a pilot study in which the approach was tested.

Different Understandings in the Study of Hate Speech

There are many terms describing the phenomenon of hate speech. However, there are also differences in how practitioners and researchers define it. Hate speech is a kind of "empty signifier" (Gagliardone et al., 2015: 55): at first glance, it seems completely clear what it means, but descriptions and definitions will most likely vary a lot depending on whom you ask. Various disciplines, such as linguistics, computer science, media studies or legal studies, investigate various forms of uncivil, harmful, inflammatory or hateful online communication. This variety of disciplines is certainly one aspect that explains the existing diversity of definitions and approaches in research.

What is Hate Speech?

Most definitions of hate speech imply that an author of communication categorizes a social group as having negative characteristics, being harmful, or acting in harmful ways. The feature that distinguishes hate speech from other forms of negative online communication, e.g., incivility, cyberbullying, flaming or trolling, is the reference to a collective: it can be directed against a group or against a single person who is seen as part of this collective (Delgado, Stefancic, 2004; Erjavec, Poler Kovačič, 2012; Gagliardone et al., 2015). This categorization can be based on people's "race" or "ethnicity", religion, sexual orientation, gender, age or other features (Erjavec, Poler Kovačič, 2012). Some researchers concentrate particularly on (hate) speech that targets only disadvantaged social groups (Jacobs, Potter, 1998); others extend the concept and include derogatory speech directed against people based on any attribute, such as their political conviction or their occupation, for example journalists (Obermaier et al., 2018). Apart from this largely consistent agreement on the necessary categorization, definitions vary a lot in terms of what else qualifies as hate speech. For some scholars the harm done to the targeted group is important (Walker, 1994); for others it refers to any "expression that is abusive, insulting, intimidating, harassing, and/or incites to violence, hatred, or discrimination" (Erjavec, Poler Kovačič, 2012: 900). In the U.S., the term is tied to an intense debate about freedom of speech. So the range of speech acts that can qualify as hate speech is quite broad.

The goals and practices of research diverge considerably across disciplines: it makes a substantial difference whether hate speech is seen as a hands-on problem that is approached, e.g., with a rule-based automatic solution similar to spam filters, or whether a legal framework must be established that is broad enough to be applicable to various situations, but still distinct enough not to curtail other rights. Although legal definitions and aspects of hate speech in particular are an important issue to discuss, they cannot be fleshed out in detail in this essay, as they are tied to legal systems and vary from country to country. While, for instance, in the U.S. Holocaust revisionism, or more precisely Holocaust denial, is protected by the First Amendment (Kahn, 2004), in Germany it is a criminal offence with a dedicated paragraph in the criminal code (Meibauer, 2013). This also illustrates the role and influence of historical and cultural legacies for the perception and definition of hate speech. The present research discusses the problems related to definitions and operationalizations of hate speech manifestations in public communication in the social sciences, humanities and computational linguistics.

The Study of Online Hate Speech in Different Research Disciplines

One of the main differences between approaches to the study of hate speech can be explained by whether the scientific interest is driven by theory or inspired by the social problems of the phenomenon itself. While computer science is especially interested in the automated detection of hate speech to provide tools for detection and filtering, communication researchers often choose a deliberative approach considering the consequences for society and the discussion culture. In addition to this disciplinary variety, there are also different methodological approaches to examining hate speech. For instance, content analyses focus on the manifest content of user-generated communication. Surveys, in turn, look at how ordinary internet users perceive or are affected by hate speech. Finally, experimental studies examine the effects of exposure to hate speech on people's attitudes toward stigmatized groups or their perceptions of the outlets in which hate speech occurs.

The present essay focuses on the manifest and explicit content of communication that can be qualified as hate speech, irrespective of its effects on readers or members of stigmatized groups. The following section proceeds in two steps: first, we discuss automatic approaches especially used within computer science; second, we review approaches relying mostly on trained human coders as practiced within the social sciences.

Hate Speech in Computer Science Research

Research in computer science addresses phenomena such as toxic comments (van Aken et al., 2018), offensive language (Chen et al., 2012), abusive speech (Nobata et al., 2016), or profanity (Sood et al., 2012). Hate speech is frequently – but not always – used as a synonym. Equating hate speech with offensive language is problematic insofar as a comment can be offensive without referring to a target group. When scholars consider this group reference in their definitions, they often concentrate on explicit racist terms or consider other characteristics (e.g., religion or sexual orientation) that are used to stigmatize groups (e.g., Davidson et al., 2017; ElSherief et al., 2018; Silva et al., 2016). But researchers who use ethnic slur lists seldom distinguish between user-generated communication addressing individuals and derogatory speech generalized to a whole social group. The main goal of computer science research is the automatic detection of hate speech in large quantities of data. To classify hate speech, one can choose a deductive approach, using dictionaries consisting of lists of ethnic slurs or category-specific swear words. One issue with these keyword-based approaches is the nature of language: it is never static, and the degree of formality and the tone change depending on the platform where user-generated content is published. Additionally, users adapt their writing or choice of words quickly to avoid censorship.
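
To make the dictionary approach concrete, the following minimal sketch in Python flags comments that contain a listed term as a whole word. The word list is an invented placeholder, not any published lexicon:

```python
import re

# Hypothetical placeholder lexicon of group-related slurs and swear words;
# real lexicons contain thousands of entries and need constant updating
# as users adapt their spelling to avoid censorship.
HATE_LEXICON = {"slur_a", "slur_b", "swearword_c"}

def dictionary_flag(comment: str) -> bool:
    """Flag a comment if it contains any lexicon entry as a whole token."""
    tokens = re.findall(r"\w+", comment.lower())
    return any(token in HATE_LEXICON for token in tokens)

comments = ["This is a normal comment.", "You slur_a should leave!"]
print([dictionary_flag(c) for c in comments])  # [False, True]
```

The sketch also illustrates the weakness discussed above: any spelling variant not in the list goes undetected.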

Closely related are rule-based methods that take syntactic (or semantic) information into account (Scharkow, 2017). They are often combined with dictionary approaches, e.g., in Silva et al. (2016): with part-of-speech tagging and the basic expression "I <intensity> <user intent> <hate target>", the authors identified common hate targets on Twitter and Whisper. The main targets were the race and the behavior of people (e.g., stupidity, sensitivity, ungratefulness), with a joint share of 86% (Twitter) and 55% (Whisper) of all hate instances. Although automatic approaches can provide valuable insights into hate speech in vast amounts of content, there are also some shortcomings that should be acknowledged. For instance, dictionaries need constant updating, and even considering syntactic information does not help to reliably detect more than explicit content. Additionally, hate speech masked by humor or implicitly phrased stigmatizing statements goes unnoticed.
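
A crude approximation of such a template is sketched below. It uses a regular expression instead of the part-of-speech tagging that Silva et al. (2016) actually employ, and the intensity markers and target pattern are invented for illustration:

```python
import re

# Hypothetical intensity markers and target pattern; Silva et al. (2016)
# derive targets from part-of-speech tags rather than from a fixed pattern.
INTENSITY = r"(?:really|absolutely|f\w+ing)?"
TARGET = r"(?P<target>[a-z]+ people|[a-z]+s)"

PATTERN = re.compile(
    rf"\bi\s+{INTENSITY}\s*hate\s+(?:those\s+|these\s+)?{TARGET}\b",
    re.IGNORECASE,
)

def extract_hate_target(post: str):
    """Return the target of an 'I <intensity> hate <target>' expression, if any."""
    match = PATTERN.search(post)
    return match.group("target") if match else None

print(extract_hate_target("I really hate rude people"))  # 'rude people'
print(extract_hate_target("I like this article"))        # None
```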

An inductive approach in terms of supervised machine learning is the development and use of classifiers. For instance, data annotated by human coders as hateful or not hateful (or with more fine-grained categorizations) serve as training material for a classifier that is used to predict the probability of unlabeled data being hateful or not. In this way, huge quantities of text can be automatically searched for hate speech, a task that humans cannot accomplish at this scale. However, the accuracy of the classifier depends on the quality of the annotations in the training material. This labeling of training data is done by human annotators who categorize comments as hateful. For instance, in Davidson et al. (2017) crowd workers labeled 25,000 tweets containing hateful terms with one of three categories, i.e., hateful, offensive, or neither. The annotators relied on a definition of hate speech as "language that is used to express hatred towards a targeted group or is intended to be derogatory, to humiliate, or to insult the members of the group" (ibid.: 512). The annotators were also instructed to take the context into account. Each tweet was distributed to at least three persons, and the majority decision per tweet was assigned as its label. The study found 5% hateful tweets, but only 1% were annotated unanimously. This supports the argument that hate speech detection is not easily done based on a categorical decision by ordinary readers.
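
The sketch below illustrates this supervised workflow under strong simplifications: a handful of invented comments, majority voting over three labels as in Davidson et al. (2017), and a plain bag-of-words model instead of the richer feature sets used in the cited studies:

```python
from collections import Counter

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented examples: each comment carries three annotator votes; the majority
# vote becomes the training label.
annotations = [
    ("they should all be deported", ["hateful", "hateful", "offensive"]),
    ("what a stupid take", ["offensive", "offensive", "none"]),
    ("interesting article, thanks", ["none", "none", "none"]),
]
texts = [text for text, _ in annotations]
labels = [Counter(votes).most_common(1)[0][0] for _, votes in annotations]

# Bag-of-words classifier; real studies use far more data and features.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(texts, labels)

print(clf.predict(["deport them all"]))  # predicted label for an unlabeled comment
```

The quality of such a classifier stands and falls with the quality and consistency of the human annotations, which is exactly the problem discussed next.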

Frequently, annotators are not briefed or informed about specifications or definitions when deciding whether a given post is hateful or not. Therefore, the reliability of annotations is often poor (Ross et al., 2016). Also, classifiers need a lot of training material (in the specific language), but pre-labeled text corpora vary considerably according to the type of derogatory speech and the targets of hate speech. Finally, classifiers are also platform-specific, i.e., they are influenced by the different lengths of texts on different platforms or by the different styles of writing in tweets and in user comments on newspaper websites.

Taken together, the first step in computer science research is always to check for the presence of a specific type of "negative" language. Some researchers also aim for more fine-grained distinctions by considering different forms of offensive language, e.g., insults, profanity or abuse (Risch et al., 2019), or by distinguishing between offensive speech, hate speech, or neither (e.g., Davidson et al., 2017). Regarding the form in which hate speech can be communicated, only a few studies take into account implicit bias (Moon et al., 2020) or implicit offensive language (e.g., Risch et al., 2019), so far with poor recall results. Lastly, ElSherief and colleagues (2018) focus on differences in how users address targets of hate speech, i.e., whether they refer to a specific individual or entity (direct) or to a particular community or group (generalized). Overall, even a look at computer science research alone illustrates the breadth of the field of automatic methods to assess hate speech.

Hate Speech in Social Sciences and Humanities

There are also overlapping concepts and different terms for "negative" online communication in the social sciences and the humanities. Some disciplines consider hate speech a form of harmful speech and, depending on the conceptualization, focus either on the harm to groups/individuals, the intent of the speaker or the content of the speech, partly factoring in the context as well (Faris et al., 2016). Another approach in this field of online communication focuses on the concept of incivility. Broadly speaking, this concept covers disrespectful behavior in online discussions toward the discussion forum, its participants, or its topics, and can consist of elements like name-calling, aspersion, lying, vulgarity, or pejorative speech (Coe et al., 2014). Scholars often differentiate between incivility that is personally or impersonally expressed, and also take the degree of negativity in a statement into account, e.g., rudeness or extreme incivility (Su et al., 2018). One could argue that hate speech is a special form of incivility, represented verbally by name-calling, aspersion, etc., with the difference that it must be directed against a specific target group and not just an individual.

Methodologically, there are numerous approaches which vary in terms of terminology, level of analysis, and the corpora under investigation. For instance, while linguistic scholars focus on semantic and syntactic units at the level of words, for communication researchers the smallest analytical unit is often a sentence or an attribution. Therefore, this essay concentrates only on the most popular social science approach for analyzing text, that is, (quantitative) content analysis. More specifically, researchers develop a theoretically deduced research instrument, a so-called codebook (also known as an annotation protocol), which covers all important aspects and allows them to investigate manifest content in a systematic, objective and quantitative way (Berelson, 1952; Krippendorff, 2018). Often, there are multiple units of analysis, e.g., the news article or the social media post and the responding comments, and researchers try to consider the context in which a hateful message arises or study the interactions between different units. Trained coders apply the codebook to a specific corpus, e.g., readers' posts and replies in the comment sections of a newspaper website. Unlike annotators in computer science, coders do not have just one task per message, such as labeling a comment as hateful or not. Most often they go through multiple categorizations (e.g., target of hate speech, recommendation of what to do with the target, reference to other comments), with multiple options (e.g., targets: homosexuals, refugees, politicians; recommendations: non-violent treatment, violent treatment, killing) to annotate a given item. The annotation protocol also offers rules on how to apply the given categories and how to choose among options. Therefore, researchers invest time in intensive coder trainings where problematic or ambiguous cases are discussed. In this training process, all coders learn to apply the codebook rules with the aim of achieving valid and reliable categorizations of communication content.
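
For illustration, a heavily reduced codebook of this kind could be represented as follows. The categories and options are a hypothetical subset chosen for the sketch, not the actual protocol of any of the cited projects:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical, heavily reduced option sets; real codebooks contain many more
# categories plus detailed coding rules and examples.
TARGETS = ("homosexuals", "refugees", "politicians", "other", "none")
RECOMMENDATIONS = ("none", "non_violent_treatment", "violent_treatment", "killing")

@dataclass
class CommentAnnotation:
    comment_id: str
    target: str = "none"                     # who is negatively evaluated, if anyone
    recommendation: str = "none"             # what should be done with the target
    references_other_comments: bool = False  # interaction with other units of analysis
    swear_words: List[str] = field(default_factory=list)

    def __post_init__(self):
        # Coders may only choose options defined in the codebook.
        assert self.target in TARGETS and self.recommendation in RECOMMENDATIONS

# One coded comment: a non-violent treatment recommendation aimed at refugees.
annotation = CommentAnnotation(comment_id="c_0815", target="refugees",
                               recommendation="non_violent_treatment")
print(annotation)
```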

Next to classical quantitative content analysis (e.g., Gerstenfeld et al., 2003; Harlow, 2015), there are numerous mixed-method approaches: for instance, scholars combine quantitative and qualitative analysis to investigate discussants' profiles or special characteristics of hateful messages (e.g., Awan, 2016; Erjavec, Poler Kovačič, 2012). Other researchers study offline trigger events fueling hate movements across countries, for instance against Muslim people (Hanzelka, Schmidt, 2017). In a random sample from the Czech Facebook page of the "Initiative against Islam" there were 20% hateful comments (13% verbal insults, 7% violent content), while for the German equivalent, Pegida, only 7% of the comments were hateful (5% verbal insults, 2% violent content). In another study, combining automatic and manual quantitative content analysis, Bahador and Kerchner (2019) used a keyword search (focusing on targets and negative words/phrases) to identify hate speech in U.S. political talk/news shows. Human coders then rated the content on a hate speech intensity scale ranging from simple disagreement to death threats. Ben-David and Matamoros-Fernández (2016) combined a network analysis with a multimodal content analysis of text, images, and links to assess hate speech and covert discrimination on the Facebook pages of Spanish extreme-right political parties. Concentrating, for example, on extremist Facebook sites or right-wing Twitter accounts, or using a negative keyword search, guarantees that researchers find enough hate speech that can be further examined. However, scholars cannot infer anything about the general share of hate speech, only about content-related information, e.g., the most common recurring words or the proportion of different forms of hate speech. Also, approaches relying exclusively on human coders can only investigate a limited amount of data. This limits generalizations from the studied content to other corpora, as well as inferences about the prevalence of hate on a broader level.

Taken together, there are different approaches to studying hate speech, which vary in terms of definitions, sampling strategies, data collection methods and techniques for data analysis. While automatic methods are indispensable for drawing inferences about the actual share of hate speech on social media platforms or media outlets, one must bear in mind that not all forms of hate speech can be detected. To improve accuracy and reliability, more training material coded by trained coders is crucial. But these training corpora are only useful if studies investigating hate speech have a common definition of the phenomenon that also guides the decisions of human annotators. To tackle this challenge, there are a few interdisciplinary projects (e.g., NOHATE1, M-PHASIS2) combining knowledge from computer science and the social sciences. Strippel et al. (2020), for instance, present hateful user comments annotated in a modularized way, distinguishing insults, generalizations, dehumanizing descriptions or violent implications. Data annotated in this way can be used by other researchers as training material for automatic detection tools that can handle a larger amount of unlabeled data. Hence, the present essay proposes an alternative conceptualization of hate speech that analytically distinguishes features of hate speech which can be combined to build a gradual hate speech scale or to examine facets or clusters of hate speech.

Conceptualization of Hate Speech: A Feature-Based Approach

A categorical understanding of hate speech always results in a binary categorization of communication content as hateful vs. not hateful. This may be useful for the community management system of a newspaper website. For academic research, however, it is problematic, since hate can be expressed in different forms that are lumped together into a joint category. Additionally, a binary categorization fails to consider qualitative differences within hate speech and ignores the gradual intensity of negativity with respect to targets. Hence, without clear-cut criteria it is difficult for annotators to categorize specific speech acts as hateful. This arbitrariness also explains poor reliability scores in annotations. Additionally, definitions are frequently very broad, as they include, e.g., "all forms of expression which spread, incite, promote or justify […] hatred" (Council of Europe: Committee of Ministers, 1997: 107), the intention of the author of a hateful message (Sponholz, 2018), or the harm done to the targeted group (Walker, 1994). While motivation or intention may be important criteria in terms of legal consequences or implications for society, they cannot be an integral part of an operationalization for a content analysis, as this would leave too much room for interpretation by coders. Put simply, coders cannot know whether user comments can elicit hate in other readers, and they are unlikely to agree in this perception with other coders. Also, the level of hate is to some degree subjective, as the perceived harm of a statement depends on coders' predispositions (e.g., gender, cf. Cowan, Khatchadourian, 2003; Wojatzki et al., 2018).

Instead, a feature-based understanding of hate speech, similar to what is already done for concepts like incivility in the social sciences or offensive speech in computer science, is clearly to be favored. In the following, we propose some necessary and sufficient conditions that must be fulfilled for a statement to be considered hate speech. What all definitions of hate speech have in common is the explicit or implicit link to a target within public statements that are negative in nature. Specifically, the negative evaluation of a person, or the assignment of negative labels or characteristics to a person based on an alleged class, category, or social group, is a necessary condition of hate speech. Social categorization per se provides a "system of orientation which creates and defines the individual's own place in society" (Tajfel, 1974: 69), and so people categorize themselves (as well as others) into different social groups. If feelings or attitudes towards such a social group result in unjustifiably negative or generalizing behavior – and speaking is a form of acting – then this speech act qualifies as discrimination (Fiske, 1998) and as hate speech. Verbal discrimination of others is accomplished by separating "us" from "them" (ingroup vs. outgroup) and treating "them" as (typical) members of an outgroup or instances of a social category in a depreciatory-derogatory manner (Graumann, 1998: 50). This understanding of hate speech implies that some form of group-motivated negativity is a necessary condition for communication to be categorized as hate speech. Advocating violence, aggression, and even killing is certainly a sufficient condition to qualify a communication as an extreme form of hate speech. However, such treatment recommendations represent rare cases, at least in public communication. Against this backdrop, merely insulting a person is not considered hate speech, unless the insult presents the behavior as typical group behavior (e.g., applying classical prejudices, as in "What is he, Polish? No wonder he stole it."3). To be clear, the present approach does not approve of impolite or uncivil user comments; it is important, though, to distinguish hate speech from other concepts that can, but do not necessarily, overlap with hate speech.
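
One possible way to make these conditions explicit is the toy decision logic below. It is a sketch of our reading of the necessary and sufficient conditions, treating group-directed negativity as the low-level threshold; it is not a complete operationalization:

```python
def classify(group_reference: bool, negativity: bool, violent_recommendation: bool) -> str:
    """Toy decision logic for the conditions discussed above."""
    if not group_reference:
        return "not hate speech"            # the group link is a necessary condition
    if violent_recommendation:
        return "extreme hate speech"        # advocating violence/killing is sufficient
    if negativity:
        return "hate speech (milder form)"  # group-directed negativity as low threshold
    return "not hate speech"

# An insult without a group link does not qualify; the same negativity tied to
# a social category does.
print(classify(group_reference=False, negativity=True, violent_recommendation=False))
print(classify(group_reference=True, negativity=True, violent_recommendation=False))
```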

Features of Hate Speech

The negativity toward a group or its members can be phrased in many ways: on the one hand, it can be expressed in a very blatant, explicit form; on the other hand, there are numerous ways to express negativity implicitly. While explicit statements are usually understood as discriminatory regardless of the speech situation, for implicit expressions of hate the circumstances of the situation and the context can be important (Graumann, Wintermantel, 1989). For the present approach, the following features are among the most important components for categorizing communication as hate speech: group-related labels and swear words, trait and action attributions (e.g., stereotypes, metaphors or narratives), treatment recommendations, and the explicit expression of feelings.

Group-related swear words: The most obvious form of hate speech comprises swear words that already encode group membership. These can be, for instance, slurs targeting the sexual orientation or gender identity of a person, slurs targeting disabilities, or a special form of slurs, so-called ethnophaulisms, i.e., pejorative names or designations for people belonging to an ethnic group. Such labels are usually based on observable phenomena like physical traits or (culturally determined) behavior patterns (Nuessel, 2008). The range of expressions is wide in form and intensity: examples are typical names referring to a (supposed) representative of a specific region ("Ali" or "Muhammad" for people from the Middle East), stereotypes expressed in descriptive terms (e.g., "camel jockey", "towel head", specifically referring to Sikhs, Arabs or Muslims), or outright ethnic slurs.

Negative action and trait attributions: Other features of hate speech encompass attributions referring to negative actions or character traits. A core mechanism described by social identity theory is that the ingroup is perceived as a group of heterogeneous individuals while the outgroup is perceived as homogeneous (Fiske, 1998), which leads to generalizations and stereotyping, e.g., "Muslim men do not respect women" or "all Muslims are anti-Semites". A special form of expressing such negative attributions is the use of specific narratives or metaphors. For instance, migrants are frequently represented as a danger, burden, disease, dirt or through other subhuman metaphors (Assimakopoulos et al., 2017: 33, 40). Another example of metaphoric framing is the equation of refugees with natural disasters, e.g., a migration flood or an asylum tsunami, phenomena that require containment measures. This can be linked to another psychological phenomenon called dehumanization. There are different conceptualizations of dehumanization, but broadly speaking, it describes the perception of a person or a group as lacking humanness (Haslam, Loughnan, 2014). It can be expressed in an animalistic form, e.g., when social groups are equated with animals such as worms, rats or pigs, or in a mechanistic form, by contrasting humans with inanimate objects (ibid.: 405).

Treatment recommendations and calls to collective action: User-generated content can also advocate specific treatments of individuals or a social group. Such treatment recommendations or calls to collective action deal with what should be done with a targeted group. Different emotions or emotional impulses toward a group are associated with different action tendencies (Fischer et al., 2018), and these treatment recommendations depend on how the targets are perceived. When targets are perceived as threatening, commenters are likely to recommend distance between the outgroup and the ingroup. Such distancing can come in different forms: e.g., border controls should be enforced, members of threatening outgroups should be deported, punished or put in jail. Some treatment recommendations can be non-violent, e.g., ethnic minorities should adapt to the culture of the host country. Others, however, may involve violent or even lethal action. When user comments advocate violence, this recommendation is frequently sold as self-defense that justifies any treatment of outgroups. Commenters rarely approve of lethal action in public communication in Western democracies. However, in humanitarian crises extreme hate speech may be more prevalent and can incite real-world aggression and violence (Whitten-Woodring et al., 2020).

Verbal/pictorial expression of emotions: Users can also express negative reactions by simply verbalizing how negative they feel, e.g., "The presence of all these refugees in our country makes me sick" or "I hate this government". These statements do not assign any attribution or directly characterize the target group, but express the user's own experience with the group, whether this experience is substantiated or not. The expression of emotions or feelings toward a group can also be conveyed by using emojis, which are often (but not exclusively) facial expressions particularly popular in electronic texting.

Advantages of a Feature-based Approach

While this list of features is not exhaustive, the features listed represent important ingredients of hate speech and can be operationalized as categories in a quantitative content analysis. Human coders can check whether and to what extent these features are present in a user comment. Manually coded data can then be used to train algorithms that automatically detect specific features or the presence of hate speech in general. Some of these features were considered in previous research, but only recently have scholars begun to examine all these different facets jointly. For instance, Strippel et al. (2020) also used a modular approach, dividing negative judgments into categories like insults, generalizations, violent implications and/or dehumanization to assess hate speech.

The combination of these features can be used to create a scale of hate speech intensity. For instance, Bahador and Kerchner (2019) propose that a violent action recommendation in a user comment is a case of more intense hate speech than swear words accompanied by negative traits; negative character traits, in turn, outweigh the mere attribution of negative actions. However, in addition to creating a unidimensional hate speech intensity scale, researchers can use this feature-based approach to explore the dimensions and structure of hate speech. Commenters often construct arguments by presenting outgroups as threatening the ingroup, which, presumably in self-defense, needs to punish or kill the outgroup. So, in addition to summing up atrocities, scholars can examine the argumentative structure of prejudiced commenters. The findings can improve counter-speech strategies that can be tailored to the specific argumentation patterns of hate speech.
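
As a sketch of such a scale, with invented weights and a simple additive rule rather than the specific six-level ordering used by Bahador and Kerchner (2019), feature annotations per comment could be turned into an intensity score as follows:

```python
# Hypothetical feature weights ordered by assumed severity; any real scale
# would need to be validated against human judgments.
FEATURE_WEIGHTS = {
    "negative_action_attribution": 1,
    "negative_trait_attribution": 2,
    "group_related_swear_word": 3,
    "violent_treatment_recommendation": 5,
    "call_to_kill": 6,
}

def intensity(features: set) -> int:
    """Sum the weights of the hate speech features detected in one comment."""
    return sum(FEATURE_WEIGHTS.get(feature, 0) for feature in features)

comment_features = {"negative_trait_attribution", "group_related_swear_word"}
print(intensity(comment_features))  # 5
```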

Pilot Study

This feature-based approach is applied in the M-PHASIS project. This research project developed an annotation protocol comprising the features of hate speech mentioned above as categories. Students were trained as coders to apply the annotation protocol to user comments on media websites. For the present pilot study, we relied on 4,526 user comments related to 31 news articles on the German website of Focus, a right-leaning news magazine that frequently publishes law-and-order news stories involving migrants. The topic of these articles was related to migration, since the focus of the M-PHASIS project is on hate speech in the context of migration. First, annotators had to decide whether a given comment included a negative depiction of any target and, if so, to specify the target. For the present analysis, these single targets were collapsed into overarching categories, i.e., migrants, politicians, other state actors (e.g., police, administration), German people, and others. In the next step, annotators had to categorize the reason for the negative evaluation or the blame, e.g., ignorance, criminality, character, ideology. This category is inspired by research on stereotypes related to specific groups, e.g., politicians or migrants. A subsequent step was to annotate action or treatment recommendations: if actors were blamed for something, commenters could come up with claims about what should be done to improve the situation in the future. We distinguished between positive treatment and various forms of negative treatment, e.g., adaptation/change, negative treatment without violence, violent treatment, and death/killing. The annotation protocol included additional features of hate speech, e.g., the presence of swear words, the verbal expression of positive or negative emotions by users, victimization, and group contrasts. These features occurred less often and frequently coincided with the categories reported below. Therefore, they are omitted here.

Results

First, we analyze the targets of negativity, the reasons for negativity, and treatment recommendations separately. Out of 4,526 user comments, more than 75% attacked a specific target actor or a social group. 32% of all these negative statements referred to political actors, i.e., the government, parties or ministers, followed by 30% of negative statements referring to migrants. Other targets of negativity are other state actors (15%), the German people (3%) and various other actors (20%, e.g., media, international actors and institutions; see also figure 1 for absolute numbers). It was also assessed whether a target was addressed individually, e.g., a specific politician or migrant portrayed in the news story, or collectively, e.g., when users blame or generalize their negative statement to a whole group, e.g., "all these criminal migrants", "politicians in our country do nothing". The figure shows that the majority of comments generalize negative evaluations of targets, i.e., they most often address groups and less frequently individuals. This occurs most often for Germans as a target, followed by politicians and migrants.

Figure 1: Frequency of negative evaluations addressed at individuals vs. groups (generalization, in %; N in brackets are absolute numbers of negative evaluations)

What accusations do commenters express in these posts? Blame attributions are found in 70% of the comments. Targets are blamed for passivity, i.e., doing nothing (29% of blame attributions), conspiracy (19%), criminality (18%), ignorance (13%), or for being a financial burden (7%). Finally, a look at the treatment recommendations reveals that in 14% of the posts, users call for some form of treatment to deal with the problem they discuss. In 52% of these cases, users advocate change (e.g., "our homeland security must be more vigilant") and adaptation (e.g., "cultural adaptation to a host country takes long"). 35% of the treatment recommendations are negative, though not lethal (e.g., expelling refugees, prosecuting people according to the rule of law). In only 1% of the comments do users advocate (physically) violent treatment (e.g., threatening to use violence to defend oneself). Another 1% of the posts recommend killing or tolerating the death of people (e.g., the expulsion of refugees who will face the death penalty in their home countries).

These findings indicate that extreme forms of hate speech, understood as recommendations to use violence against or kill members of social groups, are rare in the present case. Moderate or mild forms of hate speech occur frequently. If generalized negativity is conceptualized as the mildest form of hate speech – i.e., the low-level threshold of a hate speech intensity scale – then about 56% of all posts can be categorized as mild hate. Extreme devaluation of groups is thus lumped together with disappointment with politicians, anger and fear toward migrants, and frustration with the news media. These features can be added up, since the cumulation of negativity is more likely to qualify as hate than any single feature, except for violent treatment or killing. When we look at the number of negative features (i.e., negative target evaluations, accusations, and negative treatment recommendations) in single comments, we find that 14% of all posts include only one feature, 64% two, and 5% three features. The incidence of swear words is also around 5%, very much in line with findings from other studies on comments on German news magazine websites (e.g., Boberg et al., 2018).

In addition to this unidimensional perspective on hate speech, the feature-based approach can be used to reveal structures of hate speech. This can be done by examining frequent patterns or combinations of features that occur in individual posts. The most frequent pattern for migrants as a target group is that they are accused of being criminal. In nearly two thirds of the cases, this blaming of migrants is associated with the recommendation to bring them to trial and punish them accordingly, or to expel them. Another form of non-violent treatment that users advocate is adaptation to the culture and customs of the host country. These claims represent around 30% of the treatment recommendations addressed at migrants. Violence is rarely advocated, but if commenters propose violent treatment or killing, the targets are most frequently migrants. This pattern of communication reflects the stereotypical perception of migrants as criminals who should not be treated as equals, but as low-status groups who can be punished, expelled or maltreated.
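
Such co-occurrence patterns can be made visible by simple cross-tabulation of the annotated categories, as in the sketch below. The records are invented for illustration and are not taken from the M-PHASIS data:

```python
from collections import Counter

# Hypothetical annotated comments: (target, blame, treatment recommendation).
records = [
    ("migrants", "criminality", "negative_non_violent"),
    ("migrants", "criminality", "negative_non_violent"),
    ("migrants", "criminality", "adaptation"),
    ("politicians", "passivity", "change"),
    ("politicians", "passivity", "change"),
    ("state_actors", "ignorance", "none"),
]

# Count how often each target/blame/treatment combination occurs.
patterns = Counter(records)
for pattern, count in patterns.most_common(3):
    print(pattern, count)
```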

A completely different pattern of communication emerges when politicians or other state actors are targeted. Commenters accuse them of being passive and doing nothing to solve social problems. This blame is most often tied to the vague treatment recommendation that politicians and other state actors should change and adapt to reality. This is often expressed in the form of questions, e.g., "when does this state start to protect its citizens". Since politicians or state actors represent authorities or high-status groups or institutions, commenters cannot simply treat them as inferior by recommending harsh treatment, distancing or violence. Other patterns of features in the discourse are less prevalent, since political actors and migrants are the top targets.

Discussion

Taken together, this feature-based approach can be used to assess various aspects of hate speech, and the features can be combined to form unidimensional hate speech intensity scales. These scales can then be used by community managers to flag or moderate user comments once the detection of features is automated and thresholds of what qualifies as sufficiently hateful are defined. Additionally, the feature-based approach is useful for uncovering the structure of hateful speech and for understanding the motivation of the audience members who generate these comments. When we understand user motivations better, tailored responses or prevention programs can be put into practice to reduce or channel hateful speech in user comments.

Although the features mentioned above capture crucial aspects that characterize hate speech, it is important to note that they are not exhaustive. Most of them are explicit verbalizations of attributions, labels or actions. Implicit forms of hate speech are more difficult to assess with this approach because their meaning is not manifest. There are examples of hate speech that are implicitly phrased, and some commenters make use of this implicitness to circumvent censorship. Thus, sometimes "(…) the discriminatory content must be inferred by the recipient (and by the social scientist) from an utterance or, even worse, from a context of utterances which are not discriminatory per se" (Graumann, 1998: 54). Human coders, because they are familiar with the cultural or social context, can interpret the communication situation and read between the lines, for instance by putting descriptions of "us" and "them" into context. Additionally, hate speech is often more about what is left out or not said than about what is explicitly stated. For instance, a statement such as "there are worse things" unfolds a completely different meaning depending on whether it is written below an article about, say, the result of a soccer game or an ethnic cleansing. This example demonstrates the importance of context in deciding whether literally inoffensive statements can be categorized as hateful.

Additionally, irony and sarcasm can be a special kind of implicit hate speech and a popular weapon of some commenters to avoid censorship on news websites or social media platforms. Code words are another strategy to circumvent filters and censorship. A German annotator can understand that, e.g., the seemingly positive label "do-gooders" (originally: "Gutmenschen") has become a derogatory term for people supporting asylum seekers. However, some implicit statements are open to interpretation, and therefore human coders will often not agree in their perception of these expressions. Cases in which human coders cannot agree, or have difficulty detecting implicit meaning that is masked by humor or other rhetorical strategies, are also problematic for automatic procedures. This is a factor impeding the training of classifiers for implicit content (Risch et al., 2019).

Conclusion

There are numerous approaches and definitions in research on hate speech and related phenomena. Studies of manifest hate speech in verbal form can improve and be of greater use to different disciplines when applying a feature-based approach. Due to its flexibility, this approach can be helpful even if understandings and operationalizations of hate speech differ. In line with previous definitions, our point is that an explicit or implicit link to a target within negative public statements is a necessary condition for categorizing communication as hate speech. However, this negativity can be expressed in various ways, e.g., through negative labels or slurs, or references to traits or behaviors of targets. Although this feature-based approach cannot solve challenges regarding, for instance, humor and implicitness, it could offer a solution to the currently very wide range of operationalizations, which are often phrased quite broadly and may therefore be too unspecific for coders. Not only can these features be used to create scales that quantify hate speech intensity, they can also be brought into play to help identify the argumentative structures of hate speech. This could advance research toward a better understanding of the hate speech phenomenon. The approach can also help practitioners in their everyday assessments of hate speech, as well as in tailoring counter-speech strategies and making comment sections a safer place.

Bibliography

Assimakopoulos S., Baider F. H. & Millar S., 2017, Online Hate Speech in the European Union, Cham, Springer International Publishing. http://doi.org/10.1007/978-3-319-72604-5

Awan I., 2016, “Islamophobia on Social Media: A Qualitative Analysis of the Facebook’s Walls of Hate”, International Journal of Cyber Criminology, 10 (1), p. 1-20. http://doi.org/10.5281/zenodo.58517

Bahador B. & Kerchner D., 2019, “Monitoring Hate Speech in the US Media”. https://mediapeaceproject.smpa.gwu.edu/report/

Berelson B. R., 1952, Content analysis in communication research, Michigan, Free Press.

Burnap P. & Williams M. L., 2015, “Cyber Hate Speech on Twitter: An Application of Machine Classification and Statistical Modeling for Policy and Decision Making”, Policy & Internet, 7 (2), p. 223-242. http://doi.org/10.1002/poi3.85

Chaudhry I., 2015, “#Hashtagging hate: Using Twitter to track racism online”, First Monday, 20 (2). http://doi.org/10.5210/fm.v20i2.5450

Chen Y., Zhou Y. & Xu H., 2012, “Detecting Offensive Language in Social Media to Protect Adolescent Online Safety”, Proceedings of the 2012 ASE/IEEE International Conferences on Social Computing and on Privacy, Security, Risk and Trust (SOCIALCOM-PASSAT ‘12), p. 71-80.

Coe K., Kenski K. & Rains S. A., 2014, “Online and Uncivil? Patterns and Determinants of Incivility in Newspaper Website Comments”, Journal of Communication, 64 (4), p. 658-679. http://doi.org/10.1111/jcom.12104

Council of Europe: Committee of Ministers, 1997, “Recommendation No. R (97) 20 of the Committee of Ministers to Member States on ‘Hate Speech’”. https://rm.coe.int/1680505d5b

Cowan G. & Khatchadourian D., 2003, “Empathy, Ways of Knowing, and Interdependence as Mediators of Gender Differences in Attitudes Toward Hate Speech and Freedom of Speech”, Psychology of Women Quarterly, 27 (4), p. 300-308. http://doi.org/10.1111/1471-6402.00110

Davidson T., Warmsley D., Macy M. & Weber I., 2017, “Automated hate speech detection and the problem of offensive language”, Proceedings of the 11th International AAAI Conference on Web and Social Media (ICWSM 2017), p. 512-515.

Delgado R. & Stefancic J., 2004, Understanding words that wound, Boulder (COL), Westview Press.

Domingo D., 2011, “Managing audience participation. Practices, workflows and strategies”, p. 76-95, in: Singer, J. B., Domingo, D., Heinonen, A., Hermida, A., Paulussen, S., Quandt, T., Reich, Z., Vujnovic, M. (eds), Participatory journalism: Guarding open gates at online newspapers, Chichester, Wiley-Blackwell.

ElSherief M., Kulkarni V., Nguyen D., Wang W. Y. & Belding E., 2018, “Hate Lingo: A Target-based Linguistic Analysis of Hate Speech in Social Media”, Proceedings of the 12th International AAAI Conference on Web and Social Media (ICWSM 2018), p. 42-51.

Erjavec K. & Poler Kovačič M., 2012, “‘You Don’t Understand, This is a New War!’: Analysis of Hate Speech in News Web Sites’ Comments”, Mass Communication and Society, 15 (6), p. 899-920. http://doi.org/10.1080/15205436.2011.619679

Faris R., Ashar A., Gasser U. & Joo D., 2016, “Understanding Harmful Speech Online”, Berkman Klein Center Research Publication: 2016-21. http://doi.org/10.2139/ssrn.2882824

Fischer A., Halperin E., Canetti D. & Jasini A., 2018, “Why we hate”, Emotion Review, 10 (4), p. 309-320. http://doi.org/10.1177/1754073917751229

Fiske S. T., 1998, “Stereotyping, prejudice, and discrimination”, p. 357-411, in: Gilbert, D. T., Fiske, S. T., Lindzey, G., eds., The Handbook of Social Psychology (2nd ed.). New York, McGraw-Hill.

Gagliardone I., Gal D., Alves T. & Martinez G., 2015, “Countering online hate speech. Unesco series on internet freedom”, United Nations Educational, Scientific and Cultural Organization. http://unesdoc.unesco.org/images/0023/002332/233231e.pdf

Gerstenfeld P. B., Grant D. R. & Chiang C.‑P., 2003, “Hate Online: A Content Analysis of Extremist Internet Sites”, Analyses of Social Issues and Public Policy, 3 (1), p. 29-44. http://doi.org/10.1111/j.1530-2415.2003.00013.x

Graumann C. F., 1998, “Verbal Discrimination: A Neglected Chapter in the Social Psychology of Aggression”, Journal for the Theory of Social Behaviour, 28 (1), p. 41-61. http://doi.org/10.1111/1468-5914.00062

Graumann C. F. & Wintermantel M., 1989, “Discriminatory Speech Acts: A Functional Approach”, p. 183-204, in: Bar-Tal, D., Graumann, C. F., Kruglanski, A. W., Stroebe, W., eds., Stereotyping and prejudice: Changing conceptions, New York, Springer-Verlag. http://doi.org/10.1007/978-1-4612-3582-8_9

Hanzelka J. & Schmidt I., 2017, “Dynamics of Cyber Hate in Social Media: A Comparative Analysis of anti-Muslim Movements in the Czech Republic and Germany”, International Journal of Cyber Criminology, 11 (1), p. 143-160. http://doi.org/10.5281/zenodo.495778

Harlow S., 2015, “Story-Chatterers Stirring up Hate: Racist Discourse in Reader Comments on U.S. Newspaper Websites”, Howard Journal of Communications, 26 (1), p. 21-42. http://doi.org/10.1080/10646175.2014.984795

Haslam N. & Loughnan S., 2014, “Dehumanization and infrahumanization”, Annual Review of Psychology, 65, p. 399-423. http://doi.org/10.1146/annurev-psych-010213-115045

Herbst S., 2010, Rude democracy. Civility and Incivility in American Politics, Philadelphia (PA), Temple University Press.

Jacobs J. B. & Potter K., 1998, Hate crimes: Criminal Law and Identity Politics, New York, Oxford University Press.

Kahn R. A., 2004, Holocaust denial and the law: Dilemmas of denial in Canada, France, Germany and the United States. New York/Basingstoke, Palgrave Macmillan.

Krippendorff K., 2018, Content analysis: An introduction to its methodology (4th ed.), Thousand Oaks (CA), Sage Publications.

Ksiazek T. B. & Springer N., 2018, “User comments in digital journalism: Current research and future directions”, p. 475-486, in: Eldridge, S. A., Franklin, B., eds., The Routledge Handbook of Developments in Digital Journalism Studies, London, Routledge.

Meibauer J., 2013, “Hassrede - von der Sprache zur Politik”, p. 1-27, in: Meibauer, J., ed., Linguistische Untersuchungen: Vol. 6. Hassrede: Interdisziplinäre Beiträge zu einer aktuellen Diskussion (2nd ed.), Gießener Elektronische Bibliothek.

Moon J., Cho W. I. & Lee J., 2020, “BEEP! Korean Corpus of Online News Comments for Toxic Speech Detection”, The 8th International Workshop on Natural Language Processing for Social Media (SocialNLP), p. 25-31.

Nobata C., Tetreault J., Thomas A., Mehdad Y. & Chang Y., 2016, “Abusive Language Detection in Online User Content”, Proceedings of the 25th International Conference on World Wide Web (WWW’16), 145-153. http://doi.org/10.1145/2872427.2883062

Nuessel F., 2008, “A Note on Ethnophaulisms and Hate Speech”, Names, 56 (1), p. 29-31. http://doi.org/10.1179/175622708X282929

Obermaier M., Hofbauer M. & Reinemann C., 2018, “Journalists as targets of hate speech. How German journalists perceive the consequences for themselves and how they cope with it”, Studies in Communication and Media, 7 (4), p. 499-524. http://doi.org/10.5771/2192-4007-2018-4-499

Risch J., Stoll A., Ziegele M. & Krestel R., 2019, “hpiDEDIS at GermEval 2019: Offensive Language Identification using a German BERT model”, Proceedings of the 15th Conference on Natural Language Processing (KONVENS 2019), p. 405-410.

Ross B., Rist M., Carbonell G., Cabrera B., Kurowsky N. & Wojatzki M., 2016, “Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis”, Proceedings of the 3rd Workshop on Natural Language Processing for Computer-Mediated Communication (NLP4CMC III), p. 6-9.

Scharkow M., 2017, “Content Analysis, Automatic”, p. 1-14, in: J. Matthes, C. S. Davis & R. F. Potter (eds), The International Encyclopedia of Communication Research Methods. Hoboken (NJ), John Wiley & Sons. http://doi.org/10.1002/9781118901731.iecrm0043

Silva L., Mondal M., Correa D., Benevenuto F. & Weber I., 2016, “Analyzing the Targets of Hate in Online Social Media”, Proceedings of the 10th International AAAI Conference on Web and Social Media (ICWSM 2016), p. 687-694.

Sood S., Antin J. & Churchill E., 2012, “Profanity use in online communities”. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI’12), p. 1481-1490.

Sponholz L., 2018, Hate Speech in den Massenmedien: Theoretische Grundlagen und empirische Umsetzung, Wiesbaden, Springer VS. http://doi.org/10.1007/978-3-658-15077-8

Strippel C., Paasch-Colberg S., Laugwitz L., Emmer M. & Trebbe J., 2020, “Modularized Hate Speech Annotation: A Human-Labeled Dataset of German User Comments on Flight and Migration”, 6th International Conference on Computational Social Science (IC2S2), Amherst St. Cambridge (MA), July 19. https://www.polsoz.fu-berlin.de/kommwiss/v/bmbf-nohate/publications/IC2S2_2020_Poster_Dataset.pdf

Su L. Y.‑F., Xenos M. A., Rose K. M., Wirz C., Scheufele D. A. & Brossard D., 2018, “Uncivil and personal? Comparing patterns of incivility in comments on the Facebook pages of news outlets”, New Media & Society, 20 (10), p. 3678-3699. http://doi.org/10.1177/1461444818757205

Tajfel H., 1974, “Social identity and intergroup behaviour”, Social Science Information, 13 (2), p. 65-93. http://doi.org/10.1177/053901847401300204

van Aken B., Risch J., Krestel R. & Löser A., 2018, “Challenges for toxic comment classification: An in-depth error analysis”, Proceedings of the 2nd Workshop on Abusive Language Online (ALW@EMNLP), p. 33-42.

Walker S., 1994, Hate speech: The history of an American controversy, Lincoln (NB), University of Nebraska Press.

Whitten-Woodring J., Kleinberg M. S., Thawnghmung A. & Thitsar M. T., 2020, “Poison If You Don’t Know How to Use It: Facebook, Democracy, and Human Rights in Myanmar”, The International Journal of Press/Politics, 25 (3), p. 407-425. http://doi.org/10.1177/1940161220919666

Wojatzki M., Horsmann T., Gold D. & Zesch T., 2018, “Do Women Perceive Hate Differently: Examining the Relationship Between Hate Speech, Gender, and Agreement Judgments”, Proceedings of the 14th Conference on Natural Language Processing (KONVENS 2018), p. 110-120.

Notes

1 Access: https://www.polsoz.fu-berlin.de/en/kommwiss/v/bmbf-nohate/

2 Access: https://anr-dfg-mphasis.loria.fr/

3 Disclaimer: The examples do not reflect the views of the authors and exclusively serve to explain patterns of hate speech. Also, for a better understanding, the examples refer to the same kind of target group (targeting nationality/ethnicity). Of course, the mentioned features can also be adapted for other social groups.

How to cite this article

Print reference

Liane Reiners and Christian Schemer, “A Feature-Based Approach to Assess Hate Speech in User Comments”, Questions de communication, 38 | 2020, 529-548.

Electronic reference

Liane Reiners and Christian Schemer, “A Feature-Based Approach to Assess Hate Speech in User Comments”, Questions de communication [Online], 38 | 2020, published online 30 March 2023, accessed 7 September 2024. URL: http://journals.openedition.org/questionsdecommunication/24808; DOI: https://doi.org/10.4000/questionsdecommunication.24808

Authors

Liane Reiners

Johannes Gutenberg-Universität Mainz, Institut für Publizistik, D-55099 Mainz, Germany

Christian Schemer

Johannes Gutenberg-Universität Mainz, Institut für Publizistik, D-55099 Mainz, Germany

Copyright

CC-BY-NC-ND-4.0

The text alone may be used under the CC BY-NC-ND 4.0 license. All other elements (illustrations, imported files) are “All rights reserved”, unless otherwise stated.
