CYBERHATE: ANONYMITY AND THE RISK OF BEING EXPOSED
In this paper, we predict hateful content and quantify the causal link between anonymity and hateful content in political discussions online. First, we use supervised machine learning to build a prediction model of cyberhate in political discussions on a dominant Swedish Internet forum, Flashback. Second, we investigate how changes in anonymity affect the writing of hateful content. We scrape text from the political discussions on Flashback and have a research assistant manually classify each post in a random subset of the threads, e.g. by whether it contains hateful or aggressive writing and towards whom the hate is directed. We use the classified data to train a prediction model, which we apply to the full set of threads. We then use the predicted hate to estimate the effect of changes in anonymity on cyberhate. An event suddenly changed anonymity at the discussion forum. Because the event affected only a certain type of user, it created a quasi-experiment, with early-registered users as a treatment group and late-registered users as a control group. We obtain a prediction model of hateful content. Using its predictions in the quasi-experimental estimation, we find that after the event, when users faced a threat of reduced anonymity, early-registered users decreased their share of hateful content more than late-registered users did. We also show that this behavioural change is a combination of individuals changing how they express themselves and individuals reducing their writing or stopping entirely.
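The quasi-experimental estimation described above amounts to a difference-in-differences comparison: the change in the predicted hate share among early-registered (treated) users before and after the event, minus the corresponding change among late-registered (control) users. The following is a minimal illustrative sketch with simulated, hypothetical data; the group labels, shares, and sample sizes are assumptions for illustration only and are not the paper's actual estimates.

```python
import random

random.seed(0)

# Hypothetical post-level data: each post is coded 1 (predicted hateful) or 0.
# group: "early" (treated by the anonymity threat) or "late" (control)
# period: "pre" or "post" the event
def simulate(group, period, n=2000):
    # Assumed baseline hate shares (hypothetical): early users drop more post-event
    base = {("early", "pre"): 0.12, ("early", "post"): 0.07,
            ("late", "pre"): 0.10, ("late", "post"): 0.09}[(group, period)]
    return [1 if random.random() < base else 0 for _ in range(n)]

data = {(g, p): simulate(g, p)
        for g in ("early", "late") for p in ("pre", "post")}

def mean(xs):
    return sum(xs) / len(xs)

# Difference-in-differences estimate:
# (early_post - early_pre) - (late_post - late_pre)
did = (mean(data[("early", "post")]) - mean(data[("early", "pre")])) \
    - (mean(data[("late", "post")]) - mean(data[("late", "pre")]))

print(round(did, 3))  # negative: treated users reduced hateful content more
```

A negative estimate corresponds to the paper's finding that early-registered users reduced their share of hateful content more than late-registered users did once anonymity was threatened.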