A NEW BLACK BOX METHODOLOGY: THE CHALLENGES AND OPPORTUNITIES OF INTERROGATING THE MODERATION OF IMAGES DEPICTING WOMEN’S BODIES ON INSTAGRAM
The black box around online platforms’ internal governance practices makes it difficult for users to trust that their expression is moderated in ways free from arbitrariness and bias. This paper proposes a black box methodology for examining content moderation in practice when only parts of a platform’s regulatory system are visible from the outside. The proposed methodology, which uses content analysis and innovative digital methods to investigate how discrete inputs (i.e. images) produce certain outputs (i.e. whether an image is removed or retained), is explained through a topical case study into whether like images of Underweight, Mid-Range and Overweight women’s bodies are moderated alike on Instagram. Overall, the results show a trend of inconsistent moderation: specifically, up to 22% of the 4,994 coded images were removed by Instagram or by the user and are therefore potential false positives. Moreover, the odds of removal differ across Underweight, Mid-Range and Overweight images. These results suggest that concerns about the risk of arbitrariness and bias on Instagram, and, indeed, ongoing user distrust of the platform, might not be unfounded. In outlining the proposed methodology, this paper evaluates the methodological, legal and ethical challenges of studying Instagram, many of which stem from the significant lack of transparency around platform governance more broadly. By evaluating these challenges, we can better assess the efficacy of using black box analytics and digital methods to examine important questions around content moderation at scale.