BOT-BASED COLLECTIVE BLOCKLISTS IN TWITTER: THE COUNTERPUBLIC MODERATION OF A PRIVATELY-OWNED NETWORKED PUBLIC SPACE
On Twitter, many people face increasing harassment and abuse, particularly from individuals associated with "GamerGate" – a self-described 'social movement' viciously opposing feminist video game developers and media critics. While Twitter supports user-to-user 'blocking' (anyone can direct Twitter to hide posts or messages from a particular account), targets of GamerGate-associated harassment often describe individually blocking harassers as a Sisyphean task. In response, some are using collective blocklists, in which a group curates a list of accounts that they have identified as harassers. Any account added to a blocklist automatically becomes invisible to all of the blocklist's subscribers. Notably, this feature is not built into the Twitter platform and was not designed, developed, or officially supported by Twitter, Inc. Instead, collective blocklists are made possible through automated software agents (or bots) developed and operated by independent groups of volunteers.
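The delegation at the heart of a collective blocklist can be illustrated with a minimal sketch. This is not the volunteers' actual implementation and does not call Twitter's API; the class and method names are hypothetical, and a real blockbot would invoke the platform's block endpoint once per subscriber whenever a curator flags an account.

```python
class Blockbot:
    """Minimal model of a bot-based collective blocklist (illustrative only).

    Subscribers delegate their blocking decisions to a curated list;
    every account the curators flag is hidden from every subscriber.
    """

    def __init__(self):
        self.blocklist = set()      # accounts flagged by the list's curators
        self.subscribers = set()    # users who have opted in to this list

    def subscribe(self, user):
        """A user opts in to the shared blocklist."""
        self.subscribers.add(user)

    def add_to_blocklist(self, account):
        """A curator flags an account; in a real blockbot, the bot would
        now issue a block on behalf of each subscriber."""
        self.blocklist.add(account)

    def blocks_for(self, user):
        """Accounts hidden from a given user via the shared list."""
        return set(self.blocklist) if user in self.subscribers else set()
```

For example, once a user subscribes, every account the curators subsequently flag is hidden for them, with no per-account action on their part – the Sisyphean individual task becomes a one-time opt-in.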
This paper reports findings from an ethnography of infrastructure, investigating the development and deployment of these bot-based collective blocklists (or blockbots) on Twitter. I show how the designs (and re-designs) of blockbots are bound up in competing ideas and imaginaries about what it means for counterpublic groups to moderate a privately-owned networked public space. Blockbots are a mode of algorithmic filtering that reconfigures the affordances of a networked public, but with key differences from algorithmic filters such as Facebook's News Feed. Blockbots make responding to harassment a more visible and communal practice, and they involve imagining alternative policies and procedures for moderating content online.