MAPPING DISCORD’S DARKSIDE: DISTRIBUTED HATE NETWORKS ON DISBOARD
Keywords: Discord, Disboard, search, third-party, moderation
Scholars and journalists have noted that Discord, a social application oriented around voice/video chat communities and popular among gamers, has a history of harboring white supremacist and toxic groups. Discord has recently undertaken a public rebranding to distance itself from white supremacist, alt-right, and hateful content through a commitment to proactive moderation (Brown, 2020). However, Discord relies extensively on third-party services (such as bots and server bulletins), and current scholarship has not adequately accounted for the role of such third-party actors in facilitating hateful and white supremacist networks on private platforms like Discord. This study shows how Discord’s model of curating only popular servers offloads the ethical burden of searchability to server bulletin sites like Disboard, to deleterious effect. The study involves two parts: 1) we use critical technocultural discourse analysis to examine Discord’s blogs, moderation policies, and API (Brock, 2018), and 2) we present data scraped from the publicly available descriptions and tags of 3,600 Discord servers listed on Disboard. We find that thousands of servers on Disboard use overtly white supremacist and hateful tags, often advertising their ‘edgy’ communities as racist, raiding-oriented, and deliberately toxic. These servers exploit Discord’s moderation tools and Disboard’s networked affordances to proliferate within Discord’s distributed ecology. Ultimately, we argue that Discord’s response to hate, as a platform, does not address its reliance on unmoderated third-party services or the networked practices of its toxic communities.
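The tag-scraping step described above can be illustrated with a minimal sketch. Note that the markup below is an invented stand-in for a Disboard server listing, not Disboard's actual HTML, and the class names and structure are assumptions for illustration only; a real scraper would fetch the live listing pages and adapt its selectors accordingly.

```python
# Illustrative sketch: extracting and counting server tags from listing HTML.
# The sample markup and the "tag" class name are assumptions, not Disboard's
# real structure; this only demonstrates the tag-frequency analysis step.
from collections import Counter
from html.parser import HTMLParser


class TagExtractor(HTMLParser):
    """Collects the text content of elements carrying a 'tag' class."""

    def __init__(self):
        super().__init__()
        self._in_tag = False
        self.tags = []

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "")
        if "tag" in classes.split():
            self._in_tag = True

    def handle_data(self, data):
        if self._in_tag:
            self.tags.append(data.strip().lower())
            self._in_tag = False


# Hypothetical listing fragment standing in for scraped Disboard pages.
sample_html = """
<div class="server"><span class="tag">edgy</span><span class="tag">raiding</span></div>
<div class="server"><span class="tag">edgy</span><span class="tag">toxic</span></div>
"""

parser = TagExtractor()
parser.feed(sample_html)
tag_counts = Counter(parser.tags)
print(tag_counts)
```

Aggregating tag frequencies in this way across thousands of server listings is what allows overtly hateful tags to be surfaced and counted at scale.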