Revisiting old work on digital identity

Doing a New Year tidy up on old files and folders, I came across these images. They are about Digital Identity and look at the idea that, since AI is prone to errors and biases, our digital identity likely includes errors and is constructed with biases built in.

I took a digital self portrait, printed it and cut the print into squares, deconstructing the whole into items of information. I then used these squares to reconstruct my identity, but with errors.

The titles refer to what has happened to the information in reconstruction.

While all the images are digital, the deconstruction and reconstruction are purposefully manual, which introduces its own irregularities and errors. This is a reminder that, for the moment at least, humans make AI and it is as imperfect as we are. 

For all its problems, I still like this work. I recently saw something similar done with jigsaw pieces, which looked much better. Rather like the embroidery work, there was something meditative about the slow process of putting the squares together – I used a scanner, so all the squares were face down. I couldn’t actually see what I was doing, and they moved at the slightest touch!

I was reminded of a talk by Milo Keller (Contemporary Photography and Technology) in which he suggested that the point of producing objects is to slow down the hyper-speed and hyper-production of digital photography. This process does produce a digital image, but through a slow and reflective process.

Keller, Milo (2020) Contemporary Photography and Technology. A Vimeo recording of a lecture from Self Publish, Be Happy, September 10th 2020. http://selfpublishbehappy.com/2020/05/online-masterclasses/ [Accessed 17th January 2021 – pay to rent]

Gallery

Our messy emotions vs automated empathy

Exhibition news

I’m very pleased to say that I’ll be exhibiting my work at the Crate gallery in Margate. Please come if you are in the area. I will be posting images and video from the set-up and open days on Instagram @miriamcomber.

“Our Messy Emotions vs Automated Empathy” will be a free exhibition at Crate from 7th to 9th June 2025.

Open: 7th June, 4pm to 7pm; 8th and 9th June, 11am to 4pm.

Location: Crate, 1 Bilton Square, High Street, Margate, CT9 1EE. For more information, please visit cratespace.co.uk

Project statement

You probably know that AI can be used to identify our faces. It is also used to identify emotions, based on our facial expressions. This is even more problematic. Emotion AI (EAI) models tend to assume that we all experience the same limited range of emotions – rather like the five emotions in Disney’s Inside Out – and express them in more or less the same way.

Our emotions are more complex than this, and we express them in different ways depending on context: we have different smiles when we’re happy and when we’re being polite; we cry about sad events and sad films alike. But we know and understand these differences.

EAI is flawed and simplistic, yet it is offered as a solution to, for example, choosing suitable candidates for jobs, measuring the engagement of students and measuring customer satisfaction.

This series expresses my scepticism that EAI can clearly read and understand emotion and my resistance to its intrusion. I want to reject the possibility that what we feel can, and will, be categorised by EAI; that a program might catch my expression and decide I need to cheer up, calm down or would like to buy something.  

Each self-portrait shows me expressing an emotion. I take a B&W print of the portrait and add embroidery and other stitching to hide that emotion. I also add pins and linking threads to make a ‘net’ that mimics how Emotion AI might take a reading from the face. I use stitching so that the defence against being decoded is based on craft, grounded in handiwork and tradition, and so a more human approach than tech. Finally, the addition of glass is about the way we are seen and see ourselves – through the screens, lenses and optic fibres of the Internet.

Context

We are living in a culture of digital surveillance. This is not something that is simply imposed on us; in fact, we are all participants, whether active or passive.

We are immersed in surveillance through the networked devices that have become an essential part of our everyday life. Every interaction with these devices produces data – what we type, what we like, what we look at and for how long. This data is why we don’t pay for the internet in money. Instead, we give tech companies this data for free, and they run algorithms on it to make money. We know that what we search for, like or share will shape what we are shown and the assumptions made about what we want to buy. Everything is data. Where we go, how we get there and how long it takes us is played back to us in digital maps. What we buy, cook and eat, and when, is all data. Even our health, from our step count, or the timing of our periods, to doctors’ records, is data.

Photography contributes to the mass of data captured. Our face is scanned to identify us at airports, but also by our phones and laptops. Smart devices like doorbells, cars and vacuum cleaners collect video. Our image, particularly that of our face, can be thought of as a special kind of data. It not only represents our identity but also provides a window to our inner state, a window that can be used to classify, and potentially commoditise, our emotions.

Emotion need no longer be a human sense of vague, indefinable “feelings,” instead emotion is in the process of becoming a “legible,” standardized commodity (Scott 1999) that can be sold, managed, and altered to suit the needs of those in power.             

Sivek (2018) p2

Sivek, S. C. (2018) ‘Ubiquitous Emotion Analytics and How We Feel Today’ In: Daubs, M. S. and Manzerolle, V. R. (eds.) Mobile and Ubiquitous Media: Critical and International Perspectives. New York: Peter Lang. pp.287–301.