Data has come to play an integral role in the areas of consumption, production, politics and security. However, with vast swathes of data being collected, the question becomes how to make sense of it and how to use it decisively. Data-driven automation circumvents the supposed failings of human decision-making, which is thought to introduce frictions and resistances that slow social and economic processes. As Mark Andrejevic writes in his book Automated Media (2020, p.2), automation eliminates ‘the moment of uncertainty, unpredictability, inconsistency, or resistance posed by the figure of the [human] subject’. Digital automation, moreover, promises to transcend the ‘internal tensions that accompany the divisions in the subject: between conscious and unconscious, individual and collective, culture and nature’ (Andrejevic, 2020, p.4).
However, the drive to automation has as its centrepiece the omission of human judgment and decision-making. Information is no longer presented in a way that is fit for human interpretation, but instead as strings of indecipherable numbers designed to be ‘read’ by computers. This ‘post-representational’ logic underpins the design of automated media now used in a wide array of contexts, including surveillance and security systems, social media platforms and smart cities. There is, as Andrejevic points out, a cascading logic of automation at work – ‘automated data collection leads to automated data processing, which, in turn, leads to automated responses’ (Andrejevic, 2020, p.30).
The aim of this project is to provide everyday individuals with a way of understanding and responding to data in its material form. To be comprehensible to humans, data must be translated out of its operational form. Most people cannot make meaning from long strings of numbers, which explains why data visualisation has become so popular. Bar graphs, scatter plots and pie charts reduce complex numerical data to solid shapes that humans can interpret. Yet how these datasets are constructed, and the visual methods by which they are represented, are deeply subjective and political processes. Prior even to these questions is that of access: much of the data collected about us is held by private tech firms, so how might it be accessed?
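The claim that visual representation is itself a subjective choice can be made concrete with a small sketch (not part of the project itself; the function and values here are hypothetical illustrations). The same two numbers, rendered as bars, tell very different stories depending on an arbitrary decision about where the chart's baseline sits:

```python
def ascii_bars(values, baseline=0, width=40):
    """Render values as ASCII bars scaled between a chosen baseline and the maximum.

    The baseline is an editorial choice: it determines how large the
    differences between values appear, without changing the data itself.
    """
    top = max(values)
    return ['#' * round((v - baseline) / (top - baseline) * width) for v in values]

data = [100, 104]  # two nearly identical measurements

# Baseline at zero: the bars are almost the same length (38 vs 40 characters).
for bar in ascii_bars(data, baseline=0):
    print(bar)

# Baseline truncated to 98: the identical data now suggests a roughly
# threefold gap (13 vs 40 characters).
for bar in ascii_bars(data, baseline=98):
    print(bar)
```

The dataset never changes; only the encoding does. This is one simple instance of the broader point that visualisation decisions carry political weight.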
Post-representational logics signal an important shift from communicative forms concerned with documenting, witnessing and presenting to those that function operationally and are not fit for human interpretation. Operational images (Farocki in van Tomme, 2014), in particular, are coming to play an increasingly prominent and powerful role in everyday life. One only needs to think of facial recognition technology and the ways it is being employed to automate everyday processes, such as taking attendance at school, detecting criminal behaviour or tracking suspected terrorists in public settings. As McCosker and Wilken (2020) explain, it is the capacity of automated vision systems not only to see but also to judge that is deeply concerning. While these systems are largely invisible, they have become integral to new systems of control and power. Materialising data begins the process of deconstructing and unravelling the cascading logics of automation and post-representation; doing so, however, is not only logistically difficult but also deeply political.
Image: Hito Steyerl, 2013, How Not to Be Seen: A Fucking Didactic Educational .MOV File
References:
Andrejevic, M. (2020). Automated Media. London and New York: Routledge.
Cornell, L., et al. (2018). Trevor Paglen. London and New York: Phaidon.
McCosker, A. and Wilken, R. (2020). Automating Vision: The Social Impact of the New Camera Consciousness. London and New York: Routledge.
van Tomme, N. (2014). Visibility Machines: Harun Farocki and Trevor Paglen. New York: Artbook DAP.