SAFE FROM “HARM”: THE GOVERNANCE OF VIOLENCE BY PLATFORMS
Keywords: harm, platform governance, discourse analysis, violence, moderation
Platforms have long been under fire for how they create and enforce policies around hate speech, harmful content, and violence. In this study, we examine how three major platforms (Facebook, Twitter, and YouTube) conceptualize and implement policies for moderating “harm,” “violence,” and “danger.” Through a feminist discourse analysis of public-facing policy documents from official blogs and help pages, we found that platforms often define harm and violence narrowly, in ways that perpetuate ideological hegemony around what violence is, how it manifests, and whom it affects. Through this governance, they continue to control normative notions of harm and violence, deny their culpability, effectively manage perceptions of their actions, and direct users’ understanding of what is “harmful” versus what is not. Rather than changing the mechanisms of their design that enable harm, the platforms reconfigure intentionality and causality to try to stop users from being “harmful,” which, ironically, perpetuates harm.