ASSESSING ETHICAL AI-BASED DECISION-MAKING: TOWARDS AN APPLIED ANALYTICAL FRAMEWORK
Globally, there is strong enthusiasm for using Artificial Intelligence (AI) in government decision-making, yet this technocratic approach has significant downsides, including bias, the exacerbation of discrimination and inequality, and reduced government accountability and transparency. A flurry of recent analytical and policy work has sought to identify principles, policies, regulations and institutions for enacting ethical AI. What is lacking, however, is a practical framework and means by which AI can be assessed as ethical or unethical. This paper provides an overview of an applied analytical framework for assessing the ethics of AI. It notes that AI (or algorithmic) decision-making is an outcome of data, code, context and use. Using these four categories, the paper articulates the key questions necessary to determine the potential ethical challenges of using an AI or algorithm in decision-making, and provides the basis for their articulation within a practical toolkit that can be demonstrated against known AI decision-making tools.