Our messy emotions vs automated empathy

Automated Empathy refers to AI models that code and mimic our emotions based on, among other signs, our facial expressions.  We humans understand emotions imperfectly; they are messy, complex and ambiguous.  In coding them, AI replaces ambiguity with an illusion of certainty.  

This series expresses my resistance to Automated Empathy, contrasting the imperfection of the handmade with the seemingly perfect, but ultimately unfathomable, digital.  Each image starts with a digital self-portrait in which I act out an emotion.  I stitch into a print of the image to cover that emotion and add a ‘net’ of threads mimicking how AI might read the face.  Glass beads add a reference to tech’s screens, lenses and optic fibre.

Exhibition news

I’m very pleased to say that I’ll be exhibiting my work at the Crate gallery in Margate. Please come if you are in the area. I will be posting images and video from the set-up and open days on Instagram @miriamcomber.

“Our Messy Emotions vs Automated Empathy” will be a free exhibition at Crate from 7th to 9th June 2025.

Open: 7th June, 4pm to 7pm; 8th and 9th June, 11am to 4pm.

Location: Crate, 1 Bilton Square, High Street, Margate, CT9 1EE
For more information, please visit cratespace.co.uk

Project statement

You probably know that AI can be used to identify our faces.  It is also used to identify emotions, based on our facial expressions.  This is even more problematic.  Emotion AI (EAI) models tend to assume that we all experience the same limited range of emotions – rather like the five emotions in Disney’s Inside Out – and express them in more or less the same way.

Our emotions are more complex than this, and we express them in different ways depending on context – we have different smiles when we’re happy and when we’re being polite; we cry about sad events and sad films. But we know and understand these differences.

EAI is flawed and simplistic, yet it is offered as a solution to tasks such as choosing suitable candidates for jobs, measuring the engagement of students and measuring customer satisfaction.

This series expresses my scepticism that EAI can clearly read and understand emotion, and my resistance to its intrusion. I want to reject the possibility that what we feel can, and will, be categorised by EAI; that a program might catch my expression and decide that I need to cheer up or calm down, or that I would like to buy something.

Each self-portrait shows me expressing an emotion.  I take a B&W print of the portrait and add embroidery and other stitching to hide that emotion. I also add pins and linking threads, making a ‘net’ that mimics how Emotion AI might take a reading from the face.  I use stitching so that the solution to being decoded is based on craft, grounded in handiwork and tradition, and so a more human approach than tech. Finally, the addition of glass is about the way we are seen and see ourselves: through the screens, lenses and optic fibres of the Internet.

Context

We are living in a culture of digital surveillance. This is not something that is simply imposed on us; in fact, we are active or passive participants in it.

We are immersed in surveillance through the networked devices that have become an essential part of our everyday life.  Every interaction with these devices produces data – what we type, what we like, what we look at and for how long.  This data is why we don’t pay for the internet in money.  Instead, we give tech companies our data for free, and they run algorithms over it to turn it into money.  We know that what we search for, like or share will shape what we are shown and the assumptions made about what we want to buy.  Everything is data.  Where we go, how we get there and how long it takes us is played back to us in digital maps.  What we buy, cook and eat, and when, is all data.  Even our health, from our step count and the timing of our periods to doctors’ records, is data.

Photography contributes to the mass of data captured.  Our faces are scanned to identify us at airports, but also by our phones and laptops.  Smart devices like doorbells, cars and vacuum cleaners collect video.  Our image, particularly that of our face, can be thought of as a special kind of data.  It not only represents our identity but also provides a window onto our inner state, a window that can be used to classify, and potentially commoditise, our emotions.

Emotion need no longer be a human sense of vague, indefinable “feelings,” instead emotion is in the process of becoming a “legible,” standardized commodity (Scott 1999) that can be sold, managed, and altered to suit the needs of those in power.             

Sivek (2018), p. 2

Sivek, S. C. (2018) ‘Ubiquitous Emotion Analytics and How We Feel Today’. In: Daubs, M. S. and Manzerolle, V. R. (eds.) Mobile and ubiquitous media: Critical and international perspectives. New York: Peter Lang, pp. 287–301.