MODERATING MENTAL HEALTH: ARE AUTOMATED SYSTEMS TOO RISK AVERSE?

Authors

  • Anthony McCosker, Swinburne University of Technology

DOI:

https://doi.org/10.5210/spir.v2022i0.13051

Keywords:

Mental health, content moderation, harm, risk, resilience, algorithm

Abstract

Across commercial social media platforms and dedicated support forums alike, mental health content raises important questions about what constitutes risk and harm online, and how automated and human moderation practices can be reconfigured to accommodate resilient behaviours and social support. In work with three Australian mental health organisations that run successful discussion and support forums, this paper identifies moderation practices that can help to rethink how mental health content is managed. The work aims to improve safety and resilience in these spaces, drawing insights from successful practices to inform algorithmic and moderator treatment of mental health content more widely across social media. Through an analysis of interviews and workshops with forum managers and moderators, I argue that platforms must incorporate strengths-based context (resilience indicators) into their moderation systems and practices, challenging simplistic assessments of mental health content as risk and harm.

Published

2023-03-30

How to Cite

McCosker, A. (2023). MODERATING MENTAL HEALTH: ARE AUTOMATED SYSTEMS TOO RISK AVERSE?. AoIR Selected Papers of Internet Research, 2022. https://doi.org/10.5210/spir.v2022i0.13051

Issue

AoIR Selected Papers of Internet Research, 2022

Section

Papers