THE MORALISATION OF PREDICTIVITY IN THE AGE OF DATA-DRIVEN SURVEILLANCE
This paper argues that emerging technologies of datafication are intensifying a moralisation of predictivity. On one hand, this names the growing pressure to quantify and predict every kind of social problem: reluctance to adopt emerging surveillance technologies is construed as a negligent abdication of moral responsibility, a refusal of inevitable progress. On the other hand, it names the corresponding demand that human subjects learn to live in more predictable and machine-readable ways, adapting themselves to the flaws and ambiguities of imperfect technosystems. This argument echoes that of Joseph Weizenbaum (1976), a pioneer of early AI research and inventor of the ELIZA chatbot: well in advance of machines fully made in our image, it is human subjects who are asked to render themselves more compatible with, and legible to, those machines. Drawing on a book-length research project into the public presentation of surveillance technologies, I show how messy data, arbitrary classifications, and other uncertainties are fabricated into the status of reliable predictions. Specifically, the bulk of the paper examines the rapid expansion of counter-terrorist surveillance systems in 2010s America. In sum, the moralisation of predictivity helps suture over the many imperfections of data-driven surveillance and provides justificatory cover for its breakneck expansion across the boundaries of public and private. It perpetuates the normative expectation that what can be predicted must be, and that what needs to be predicted surely can be. In the process, spaces for human discretion, informal norms, and sensitivity to human circumstance are squeezed out.