MODERATING MENTAL HEALTH: ARE AUTOMATED SYSTEMS TOO RISK AVERSE?
Keywords: Mental health, content moderation, harm, risk, resilience, algorithm
Abstract
Across commercial social media platforms and dedicated support forums alike, mental health content raises important questions about what constitutes risk and harm online, and how automated and human moderation practices can be reconfigured to accommodate resilient behaviours and social support. Drawing on work with three Australian mental health organisations that run successful discussion and support forums, this paper identifies moderation practices that can help to rethink how mental health content is managed. The aim is to improve safety and resilience in these spaces, drawing insights from successful practices to inform algorithmic and moderator treatment of mental health content more widely across social media. Through an analysis of interviews and workshops with forum managers and moderators, I argue that platforms must incorporate strengths-based context (resilience indicators) into their moderation systems and practices, challenging simplistic assessments of mental health content as risk and harm.
How to Cite
McCosker, A. (2023). MODERATING MENTAL HEALTH: ARE AUTOMATED SYSTEMS TOO RISK AVERSE?. AoIR Selected Papers of Internet Research, 2022. https://doi.org/10.5210/spir.v2022i0.13051