EXAMINING THE EFFECTIVENESS OF ARTIFICIAL INTELLIGENCE-BASED CYBERBULLYING MODERATION ON ONLINE PLATFORMS: TRANSPARENCY IMPLICATIONS
Keywords: Cyberbullying, Transparency, Artificial Intelligence, Social Media, Gaming Platforms
Abstract
Cyberbullying remains a significant problem for children and appears to have been exacerbated by Covid-19 lockdowns, which moved many of children's offline activities online. Transparency reports shared by social network and gaming platform companies indicate increased take-downs of offensive and harmful comments, posts, or content by artificial intelligence (AI) tools. Nonetheless, little is known about how such tools are designed and developed, what data they are trained on, and how this is done in practice. Many studies have discussed the opacity of such algorithmic *moderation, detection and prevention* solutions and called for greater transparency to understand *how* and *what* user-interactive engagement features help in AI decision-making. This study a) examines the use of AI solutions by social media and gaming companies to proactively address cyberbullying on their platforms, b) explores the current cyberbullying detection, prevention, and proactive intervention strategies of such companies, and c) reviews, through a comprehensive database search, the existing computational literature on monitoring, detection, and intervention strategies for addressing cyberbullying incidents among children. Our findings show that *very scarce* resources are available in the public domain to build AI algorithmic solutions to combat cyberbullying, and *little information* is publicly available that would allow scrutiny of platforms' enforcement mechanisms.
How to Cite
Verma, K., Davis, B., & Milosevic, T. (2023). Examining the effectiveness of artificial intelligence-based cyberbullying moderation on online platforms: Transparency implications. AoIR Selected Papers of Internet Research, 2022. https://doi.org/10.5210/spir.v2022i0.13100