The communicative features of online hate in temporary social networks in Twitter and YouTube

Kirby America
University of the Western Cape


BACKGROUND
In recent years, communicating with others online has grown exponentially, and social networking sites such as Twitter, YouTube and Facebook have become popular forms of communication, especially among young people. In social networking, communication mostly occurs within the public domain. Mark Zuckerberg, founder of Facebook, stated that 'privacy was no longer a social norm' (Johnson, 2010). Anyone who signs up to a social networking site is expected to share information within the public domain; why else would you have a Facebook account? Zuckerberg goes on to state that 'people have really gotten comfortable not only sharing more information and different kinds, but more openly and with more people…' (The Guardian, 2010).
One of the intriguing characteristics of social networking is anonymity: the ability to take on a different identity or hide behind a screen name. This anonymity allows individuals to conceal who they are while posting misogynistic, homophobic, racist and similarly hateful content. Online hate has gone viral (Foxman, 2013). The rapid spread of online intolerance has led many social platforms to update their usage policies in a bid to combat further spread of this scourge, some more proactively than others. According to Foxman (2013), online hate is often treated as the 'inevitable effluent of Internet freedom' and rationalized as a problem 'too big to address', as thousands of comments are posted on the various platforms daily. The line between freedom of speech and online hate is therefore blurred.
Many studies in the field of computer-mediated communication (CMC) and online hate focus on the behavioural and psychological effects on victims, the motivations for the hate, the profiles of victims and perpetrators, and so on. Few studies, however, look at the way in which language is used to convey online hate: which specific features are used and how each platform is used. This study is important as it aims to identify the ways in which online hate manifests in online communication. Acknowledging that no two platforms are the same, the study compares threads around particular topics on Twitter and YouTube, paying close attention to the interaction among members who find themselves in these temporary networks. According to Hardaker (2010: 216), there are only a few studies on 'linguistic aggression online'. She goes on to state that this could be a result of CMC being viewed as 'frivolous, insignificant or marginal' (cf. Herring and Nix, 1997; Merchant, 2001; Cho et al., 2005). The findings of this study will contribute to the growing body of work in the fields of sociology and psychology, as well as provide a new perspective within sociolinguistics.
The purpose of this study is threefold. Firstly, to identify the communicative features of computer-mediated communication, looking specifically at the unique characteristics of Twitter and YouTube. Secondly, to examine how group behaviour is influenced by other members within the temporary networks, that is, whether posts or comments are influenced by others. These temporary networks form around particular topics; when a topic is no longer relevant, members stop communicating with one another, and the network disintegrates. Finally, to determine the relationship between online hate and freedom of speech on the two platforms and in their usage policies.
The study aims to answer the following questions:

METHODS
This study makes use of a mixed-methods approach, combining qualitative and quantitative methods in the collection and analysis of data. The most crucial aspect of the study's inclusion criteria is location: users should reside in South Africa and should be active users of Twitter and YouTube. The number of users was not determined at the outset of the study because of the way data is collected: every user who posts during the 7-day window, and all comments on each of the 8 YouTube videos, will be captured, resulting in a large corpus of posts. A location filter will be used on each platform to restrict the posts collected. This method of sampling and data collection means that there is no face-to-face or direct interaction with the participants.
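By way of illustration, the location-based inclusion criterion could be operationalised as follows. This is a minimal Python sketch only: the field name `user_location`, the sample posts and the substring-matching rule are invented for the example, and in practice the platforms' own location filters are applied.

```python
def meets_inclusion_criterion(post, location="South Africa"):
    """Keep only posts whose profile location names the study area."""
    return location.lower() in post.get("user_location", "").lower()

# Invented sample posts: one inside the study area, one outside it.
posts = [
    {"user_location": "Cape Town, South Africa", "text": "..."},
    {"user_location": "London, UK", "text": "..."},
]

# Only the South African post survives the filter.
corpus = [p for p in posts if meets_inclusion_criterion(p)]
```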

Participants
Collecting data from both males and females allows comparisons to be made in terms of online language use, frequency of posting, who took part in any form of online hate (intentionally or unintentionally), and so on. This forms part of the quantitative data of the study. The decision not to set an age range for this study is threefold. Firstly, an age restriction would limit the amount of data. Secondly, although not common, individuals as young as 13 now sign up to social networking sites (SNSs). Lastly, users' ages are not readily available in their bios or evident in their usernames (as discovered in Beevolve's 2012 study).

Data Collection
For YouTube, all comments on 8 video uploads will be collected. Collecting every comment per video ensures that no data is lost as users comment on each other's posts; selecting only particular posts would take them out of context in the analysis. Two applications will be used to select the YouTube videos. The first, 'YouTube videos near me', integrates the Google Maps application to find videos posted in one's location. The second is YouTube itself: the column to the left of the main page offers the option to view popular videos, and if the location is set to South Africa, YouTube displays videos that are popular in South Africa (#PopularOnYoutubeSouthAfrica). Twitter data will be captured manually every second day, collecting 7 days of posts, with each collection session lasting approximately 30 minutes. Once the YouTube videos have been selected and their comments collected, the Twitter data will be further filtered by searching for posts similar in content to the YouTube comments, allowing comparison at the data analysis stage.
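The final filtering step, matching tweets against the content of the YouTube comments, could be sketched as a simple keyword-overlap check. This is an illustrative assumption about how 'similar content' might be operationalised, not the study's actual procedure; the keywords and tweets below are invented examples.

```python
def keyword_overlap(text, keywords, threshold=1):
    """True if the text shares at least `threshold` words with the keyword set."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return len(words & keywords) >= threshold

# Invented keywords drawn, hypothetically, from the YouTube comment corpus.
yt_keywords = {"loadshedding", "eskom", "protest"}

tweets = ["Eskom announced loadshedding again", "Lovely weather today"]

# Retain only tweets topically similar to the YouTube commentary.
matched = [t for t in tweets if keyword_overlap(t, yt_keywords)]
```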

Data Analysis
Three methods of analysis are used: content analysis, computer-mediated discourse analysis (CMDA) and social network analysis (SNA). Content analysis is used to identify codes and categories in the data collected; the final categories are then compared across the two platforms. Computer-mediated discourse analysis examines four domains of language, namely structure, meaning, interaction, and social behaviour (Herring, 2004: 18-19). CMDA also includes frequency counts (message count, message length and rate of response, identifying 'core participants'), structural analysis (abbreviations, word choice and language routines, i.e. 'in-group language') and pragmatic analysis (speech acts of positive politeness such as 'support', and so on).
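The frequency counts used in CMDA can be sketched as follows. This is a minimal Python illustration of message count, mean message length and the identification of the most active ('core') participant; the usernames and messages are invented.

```python
from collections import Counter

# Invented (user, message) pairs standing in for the collected corpus.
posts = [
    ("user_a", "This is outrageous"),
    ("user_b", "I agree completely"),
    ("user_a", "No one should tolerate this"),
]

# Message count per user.
msg_count = Counter(user for user, _ in posts)

# Mean message length (in words) per user.
mean_len = {
    user: sum(len(text.split()) for u, text in posts if u == user) / n
    for user, n in msg_count.items()
}

# The most frequent poster is a candidate 'core participant'.
core = msg_count.most_common(1)[0][0]
```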
The purpose of social network analysis is to understand social interaction among members of a particular network by looking at the content (topic of discussion), the frequency of interaction and the strength of ties. Serrat (2009: 3) states that, in SNA, the data collected is mapped in a graph; this 'data gathering and analysis process provides baseline information against which one can then prioritize and plan interventions to improve knowledge flows, which may entail recasting social connections'. The application used for mapping the data is NodeXL (an extension of Microsoft Excel).
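Before the data is mapped in NodeXL, the underlying graph structure can be understood as a weighted edge list: each reply is a directed edge, and repeated interaction between the same pair gives the strength of their tie. The sketch below illustrates this with invented member names; it is not the study's NodeXL workflow.

```python
from collections import Counter

# Invented reply interactions: (replier, addressee) per reply.
replies = [("anna", "ben"), ("anna", "ben"), ("ben", "anna"), ("cara", "ben")]

# Tie strength: how often each ordered pair interacted.
tie_strength = Counter(replies)

# Degree: how many interactions each member is involved in overall.
degree = Counter()
for src, dst in replies:
    degree[src] += 1
    degree[dst] += 1
```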

Ethical Considerations
Even though the data needed for this study appears in the public domains of Twitter and YouTube and does not require the consent of the users, it is still important to consider ethics. A key aspect of ethics is the concept of 'do not harm' (Druckman, 2005: 160). To protect users' identities and SNS accounts, users will remain anonymous, and pseudonyms will be used in place of real names and screen names. The only characteristics revealed are age (where found), gender and location, as these are essential to the methodology of the study.
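The pseudonymisation step can be sketched as a stable mapping from screen names to invented labels, applied before any analysis. The handles and label format below are illustrative assumptions, not the study's actual pseudonyms.

```python
def pseudonymise(posts):
    """Replace each screen name with a stable pseudonym (User01, User02, ...)."""
    mapping = {}
    anonymised = []
    for screen_name, text in posts:
        if screen_name not in mapping:
            mapping[screen_name] = f"User{len(mapping) + 1:02d}"
        anonymised.append((mapping[screen_name], text))
    return anonymised

# Invented example handles; the same handle always maps to the same pseudonym.
data = [("@real_handle", "a comment"), ("@real_handle", "another"), ("@other", "hi")]
anon = pseudonymise(data)
```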

THEORETICAL FRAMEWORK
The study is informed by the theories of online disinhibition and social contagion. Online disinhibition holds that, in CMC, people do and say what they want, including things they normally would not do in face-to-face communication; in CMC there is a sense of freedom, a loss of inhibition (disinhibition). There are two types of disinhibition, namely positive (sharing fears, personal information, emotions, wishes, etc.) and negative (rude language, criticism, anger, hatred, threats, etc.); the negative behaviour is commonly known as toxic disinhibition (Suler, 2004). Social contagion describes social interaction as contagious: responses are influenced by comments posted by other members of the network, almost hypnotically. It explains that users often feel protected by anonymity, further fuelling both uninhibited speech and hate. Social contagion theory also refers to collective behaviour, where a single comment can influence or set the tone for an entire thread.