DATA SELVES: TRUST, CONTROL AND SELF-REPRESENTATION IN DIGITAL SOCIETY

Authors

  • Brady Robards, Monash
  • Benjamin Lyall, Monash
  • Claire Moran, Monash
  • Jean Burgess, Queensland University of Technology
  • Kath Albury, Swinburne
  • Rowan Wilken, RMIT
  • Anthony McCosker, Swinburne
  • Terri Senft, Macquarie

DOI:

https://doi.org/10.5210/spir.v2019i0.10944

Keywords:

selfies, quantified data, representation, data markets

Abstract

A considerable amount of personal data is now collected on and by individuals: footsteps on Fitbits, screen time in Apple’s iOS, conversations on dating apps, sleeping patterns in baby tracker apps, and viewing habits on Netflix and YouTube. What value do these data have, for individuals but also for corporations, governments, and researchers? When these data are provided back to users, how do people make sense of them? What ‘truth claims’ do quantified personal data make? How do we navigate anxieties around datafied selves, and in what ways are bodies rendered visible or invisible through processes of datafication in digital society?

In this panel we explore these questions through four papers centered on the notion of the “data-selfie.” Data-selfies take different forms, including but not limited to:
- Visuals that reference the “status” or “progress” of a user’s physical body, as in 3-D scans, or charts generated by self-monitoring apps for health and fitness.
- Visuals that reference the remapping of photographic self-expression to biometric, corporate and state surveillance, such as airport facial recognition checkpoints that ask flyers to pose for a selfie, or sex offender databases that now contain images first posted to hook-up apps by consenting teenagers.
- Representations of the embodied or commoditized self, produced not as stand-alone expression, but as conversational prompts that encourage qualitative, “story-driven” data, in the interests of pedagogy, therapy, activism, etc.
- Profiles that reference users as “targets” whose chief value is the metadata they generate. Using proprietary algorithms, platforms mine this metadata—which can include information about a user’s device, physical location, and their activities online—categorizing it for internal use, and selling it to third parties interested in influencing the consumer, social and/or political preferences of the “targets” in question.

In Paper 1, Authors 1, 2, and 3 develop a new conceptualisation for understanding how individuals reveal themselves through their own quantified personal data. They call this the ‘confessional data selfie’. Drawing on a sample of 59 examples from the top posts in the subreddit r/DataIsBeautiful, they argue that the confessional data selfie represents an aspect of one’s self through visualisations of personal data, inviting analysis, eliciting responses and personal storytelling, and opening one’s life up to others.

In Paper 2, Authors 4, 5, and 6 take a political economy of communication approach to analyse the data markets of dating apps. They consider three cases: Grindr, Match Group (parent company of Tinder), and Bumble. Drawing on trade press reportage, financial reports, and other materials associated with the apps and publishers in question, they point to the increased global concentration in ownership of dating app services and raise questions about the ways in which dating apps are now in the ‘data business’, using personal data to profile users and monetise private interactions.

In Paper 3, Author 7 reports on experiences of ‘data anxiety’ among older people in Australia. Author 7 draws on data literacy workshops, home-based interviews, and focus groups with older internet users, which led to discussions of control over personal data, control over social interactions, and the resulting implications for exposure, openness, and visibility. Also key to this study was the taking and sharing of selfies in a closed Facebook group, which served as the starting point for reflections on these various experiences of control. Many of these older participants questioned whether ongoing participation in social media and broader data structures was ‘worthwhile’. This raises broader questions about the extent to which users are willing to sacrifice control over personal data, or the feeling of control, in order to participate and be visible.

Finally, in Paper 4, Author 8 asks: when is the face data? Moving from examples of ‘deepfake’ video exhibitions to Google Art as a repository of ‘face-data’ that carries cultural and social capital, Author 8 goes on to examine how notions of face-as-data apply to individuals living with the neurological condition of autism. Can facial recognition apps help people with autism to read and decode human expressions?

Taken together, these four papers engage with questions about the relationship between personal data and broader structures of power and representation: from corporations like Grindr and Tinder using dating app data to profile users, to Google using uploaded selfies to train facial recognition algorithms; through to re-purposing and narrativising personal data as part of practices of self-representation; and the feelings of anxiety, unease or creepiness that accompany the increased datafication of personal identity. Self-representation is also a key recurrent thread in these papers, from confessional data selfies as acts of revelation through quantified personal data, through to the photographic selfie as a research exercise that prompts discussions of control and data privacy.

Published

2019-10-31

How to Cite

Robards, B., Lyall, B., Moran, C., Burgess, J., Albury, K., Wilken, R., … Senft, T. (2019). DATA SELVES: TRUST, CONTROL AND SELF-REPRESENTATION IN DIGITAL SOCIETY. AoIR Selected Papers of Internet Research, 2019. https://doi.org/10.5210/spir.v2019i0.10944

Issue

Section

Panels