NON-HUMAN HUMANITARIANISM: WHEN AI FOR GOOD TURNS OUT TO BE BAD
DOI: https://doi.org/10.5210/spir.v2020i0.11267
Keywords: Artificial intelligence, algorithms, automation, humanitarianism, 'AI for social good'
Abstract
Artificial intelligence (AI) applications such as predictive analytics, forecasting and chatbots are increasingly proposed as solutions to the complex challenges of humanitarian emergencies. This is part of the broad trend of 'AI for social good' as well as the wider developments in 'digital humanitarianism'. The paper develops an interdisciplinary framework that brings together colonial and decolonial theory, the critical inquiry into humanitarianism and development, critical algorithm studies, and a sociotechnical understanding of AI. Drawing on a review of current humanitarian AI applications as well as interviews with stakeholders, our analysis suggests that several initiatives fail to meet their own objectives. This does not mean, however, that these innovations lack powerful consequences. Automation reproduces human biases whilst removing human judgement from situations, potentially further marginalizing disadvantaged populations. We observe a transformation of humanitarian work as technology separates officers from the consequences of their actions. At the same time, AI initiatives are cloaked in a discourse of inherent progress and an aura of 'magic'. Rather than democratizing the relationship between humanitarian providers and suffering subjects, digital technology reaffirms the power asymmetries associated with traditional humanitarianism. The non-human aspects of AI humanitarianism reveal, rework and amplify existing deficiencies of humanitarianism. The hype generated by humanitarian innovation appears to benefit commercial stakeholders more directly than affected populations. Ultimately, by turning complex political problems such as displacement and hunger into problems with technical solutions, AI depoliticizes humanitarian emergencies.