THE DARK SIDE OF LLM-POWERED CHATBOTS: MISINFORMATION, BIASES, CONTENT MODERATION CHALLENGES IN POLITICAL INFORMATION RETRIEVAL

Authors

  • Joanne Kuai, Karlstad University
  • Cornelia Brantner
  • Michael Karlsson
  • Elizabeth Van Couvering
  • Salvatore Romano

DOI:

https://doi.org/10.5210/spir.v2024i0.13977

Keywords:

Algorithmic gatekeeping, comparative studies, algorithm auditing, generative information retrieval

Abstract

This study investigates the impact of Large Language Model (LLM)-based chatbots on political information retrieval, using the 2024 Taiwan presidential election as a case study. With the rapid integration of LLMs into search engines such as Google and Microsoft Bing, concerns have emerged about information quality, algorithmic gatekeeping, biases, and content moderation. This research aims to (1) assess the alignment of AI chatbot responses with factual political information, (2) examine the adherence of chatbots to algorithmic norms and impartiality ideals, (3) investigate the factuality and transparency of chatbot-sourced synopses, and (4) explore the universality of chatbot gatekeeping across different languages within the same geopolitical context. Adopting a case study methodology and a prompting method, the study analyzes responses from Microsoft's LLM-powered search engine chatbot, Copilot, in five languages (English, Traditional Chinese, Simplified Chinese, German, Swedish). The findings reveal significant discrepancies in content accuracy, source citation, and response behavior across languages. Notably, Copilot demonstrated a higher rate of factual errors in Traditional Chinese while performing better in Simplified Chinese. The study also highlights problematic referencing behaviors and a tendency to prioritize certain types of sources, such as Wikipedia, over legitimate news outlets. These results underscore the need for enhanced transparency, thoughtful design, and vigilant content moderation in AI technologies, especially during politically sensitive events. Addressing these issues is crucial for ensuring high-quality information delivery and maintaining algorithmic accountability in the evolving landscape of AI-driven communication platforms.

Published

2025-01-02

How to Cite

Kuai, J., Brantner, C., Karlsson, M., Van Couvering, E., & Romano, S. (2025). The dark side of LLM-powered chatbots: Misinformation, biases, content moderation challenges in political information retrieval. AoIR Selected Papers of Internet Research. https://doi.org/10.5210/spir.v2024i0.13977

Section

Papers K