@article{Siapera_Bastos_Curley_Khan_Cunnigham_Scally_Tuters_Walker_Suarez-Tangil_Bispham_et_al._2023, title={THE TOXIC TURN? CONCEPTUAL AND METHODOLOGICAL ADVANCES ON PROBLEMATIC CONTENTS ON SOCIAL MEDIA}, volume={2022}, url={https://spir.aoir.org/ojs/index.php/spir/article/view/12967}, DOI={10.5210/spir.v2022i0.12967}, abstractNote={The ‘toxic turn’ in social media platforms continues unabated. Hate speech, mis- and disinformation, misogynistic and racist speech, images, memes and videos are all far too common on social media platforms and more broadly on the internet. While the diminishing popularity of populist politicians led to hopes for less social toxicity, the Covid-19 pandemic introduced new and more complex dimensions. Tensions have emerged around what constitutes problematic content and who gets to define it. Co-regulation models, such as the EC Code of Conduct against Illegal Hate Speech, focus on the legality of certain types of contents, while leaving other categories of problematic contents to be defined by platforms. In parallel, the social media ecosystem became more diverse, as new platforms with hands-off moderation policies attracted users who felt too constrained by the policies of mainstream platforms. The proposed panel examines this complex and dynamic landscape by problematizing what is understood as toxic, deplatformed, removable and, more generally, problematic content on platforms, with the aim of probing the boundaries of what constitutes acceptable discourse on platforms and mapping its implications. In particular, this panel discusses the broad definition of ‘problematic content’ employed by social media platforms, a catch-all term that cuts across hate speech and propaganda, including more politically topical content such as mal-, mis-, and disinformation, hyperpartisan and polarising content, but also abusive, misogynistic, racist, and homophobic discourse. The term is also employed to refer to spam and content that infringes upon the Terms of Service or the Community Standards of social media platforms. As such, it is a broad category that resists a narrower classification given the operational scope of its use. Defining what constitutes problematic content is a key operation of platform content moderation policies but is also the subject of intense debates (de Gregorio, 2020; Gillespie, 2018; Gillespie et al., 2020; Gorwa et al., 2020). The panel interrogates the many definitions and applications of problematic content on social media platforms and applications through an empirically informed lens, focusing on deleted contents, complex mixed narratives, and grey areas, including hidden misinformation on voice applications. The first contribution, Problematic Content according to Twitter Compliance API, presents ongoing work on the Twitter Compliance API and the Compliance Firehose, which allow researchers to identify content that has been deleted, deactivated, protected, or suspended from Twitter, a proxy for problematic content. In the second contribution, Multi-Part Narratives on Telegram, Siapera presents ongoing research that probes the intersection between Covid-19 scepticism, far-right and other political narratives in vaccine-hesitant groups on Telegram. The third contribution, What if Bill Gates really is evil, people? Investigating the infodemic’s grey areas, discusses the conceptual and methodological definitions of problematic content in relation to work on anti-vax and other conspiratorial narratives on Instagram and on Twitter.
The fourth contribution, Misinformation and other Harmful Content in Third-Party Voice Applications, focuses on problematic content that is yet to be identified on voice applications such as personal assistants. It addresses the methodological challenges of identifying and defining such contents on applications that currently have no content moderation policies. All contributions foreground the difficulties and costs of identifying and dealing with problematic contents on social media. The panel fits with the theme of decolonization in two ways: firstly, because it is concerned with the tensions around how toxic/problematic contents are defined and who gets to define them; and secondly, because of its focus on neo-colonial discourses or justifications for colonialism both in narratives hosted by platforms and in platforms’ attempts to regulate content. As some narratives are flagged for removal by social platforms, they also raise the question of who decides and what problematic content means, with far-right discourses exploiting this tension and ironically denouncing any attempt to regulate public discourse as ideological enforcement and a justification for (neo)colonial practices performed by social media platforms. From this perspective, platforms’ own claims about what constitutes acceptable content are uncomfortably close to colonial narratives of civilised discourse and bring to the fore the potential for neo-colonial narratives and practices in digital spaces. References De Gregorio, G. (2020). Democratising online content moderation: A constitutional framework. Computer Law & Security Review, 36, 105374. Gillespie, T. (2018). Custodians of the Internet. Yale University Press. Gillespie, T., Aufderheide, P., Carmi, E., Gerrard, Y., Gorwa, R., Matamoros-Fernández, A., ... & West, S. M. (2020). Expanding the debate about content moderation: Scholarly research agendas for the coming policy debates. Internet Policy Review, 9(4), Article-number. Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society, 7(1), 2053951719897945.}, journal={AoIR Selected Papers of Internet Research}, author={Siapera, Eugenia and Bastos, Marco and Curley, Cliona and Khan, Mansura and Cunnigham, Padraig and Scally, Brendan and Tuters, Marc and Walker, Shawn and Suarez-Tangil, Guillermo and Bispham, Mary and others}, year={2023}, month={Mar.} }