Exhibition news

I’m very pleased to say that I’ll be exhibiting my work at the Crate gallery in Margate. Please come if you are in the area. I will be posting images and video from the set-up and open days on Instagram @miriamcomber.

“Our Messy Emotions vs Automated Empathy” will be a free exhibition at Crate from 7th to 9th June 2025.

Open: 7th June, 4pm to 7pm; 8th and 9th June, 11am to 4pm.

Location: Crate, 1 Bilton Square, High Street, Margate, CT9 1EE. For more information, please visit cratespace.co.uk

A day with Jessa Fairbrother

Jessa Fairbrother is an artist working with embroidery on self-portraits. Her work is exceptional, very beautiful, very delicate and very emotional.  I strongly recommend you have a look at her website and Instagram @jessfairbrother.

While her work is a world away from mine, I felt we had common ground in stitching meaning into photographs. Jessa runs workshops in her studio in Bristol and we organised a day that would be part portfolio review and part workshop.

Portfolio review

It was great to do a portfolio review face to face. I brought all my embroidered images, pinned on foam board, with a selection of glass beads so I could re-create the total effect, and A5 prints of the final images. Jessa put together groups of images that worked well together. I’m always interested in people’s views on this because I think I get blind to the images after working on them for so long. For example, one thing that had never occurred to me was that the shoulders should be in about the same place across a set, so the images look good together.

We also discussed developing the work. One thing that we tried was using just one type of glass on several images. I include glass to refer to the internet, but that’s a stretch for viewers to make, and keeping the expression of that idea consistent could help.

The conversation about the images always came back to the ‘why’ behind the project. What am I trying to say, why am I making these choices and, of course, why does it matter? This informed a conversation about what the final image is and the difference between a unique image, which Fairbrother makes, and a collage, which is what I make. I’d not thought about the images being collages.

Capturing the sets and sequences.

Workshop

Jessa is an expert needlewoman and, although I’ve some experience with stitching, she was able to help me get a better understanding of the technicalities of sewing on photographic paper – the best paper, thread and needles to use, and how to start and finish off. I found I had a lot to learn. For example, while I use good quality paper, I’ve used any old adhesive (to stick on beads) and tape (to keep tension in the thread). I now understand that I need to think about using materials that are acid free and won’t rust, even if the work is not going to be the final image.

Studio Visit

An added bonus was seeing Jessa’s work and having her talk me through the techniques and the thoughts and inspirations behind each piece. The work stretched back to “Conversations with my mother”, an early piece made after her mother had died and she found that she could not have children herself. Across a number of works she explained how she uses thread and perforations of the image not just as decoration but as a metaphor for attachment, absence and more. Her work is highly decorated, but I’d not understood until I saw the actual objects that the stitches are very small and delicate. The stitches used are not complex, but the patterns are intricate and interesting. Of course, one of the key points of doing this kind of work is that there is a sort of jeopardy – mistakes cannot really be corrected.

Overall

This was a really useful and enjoyable day. I got:

  • a considered portfolio review, not the usual 20-30 mins, so we had time to discuss both what I was trying to do and to think about the ‘why’ behind that, 
  • technical advice on stitching, materials, etc.
  • and the opportunity to see, hold and discuss Jessa’s own work.

Beyond all this, our conversation gave me a really interesting and useful insight into what an artistic practice means and the context of the art industry. As well as being a fount of knowledge, Jessa is a lovely person to meet.

Revisiting old work on digital identity

Doing a New Year tidy up on old files and folders, I came across these images. They are about Digital Identity and look at the idea that, since AI is prone to errors and biases, our digital identity likely includes errors and is constructed with biases built in.

I took a digital self portrait, printed it and cut the print into squares, deconstructing the whole into items of information. I then used these squares to reconstruct my identity, but with errors.

The titles refer to what has happened to the information in reconstruction.

While all the images are digital, the deconstruction and reconstruction are purposefully manual, which introduces its own irregularities and errors. This is a reminder that, for the moment at least, humans make AI and it is as imperfect as we are. 
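Out of curiosity, here is a minimal sketch of a purely digital analogue of that deconstruction and reconstruction, assuming the Pillow imaging library; the file names, tile size and number of swapped squares are placeholders rather than anything I actually used.

```python
# Illustrative only: cut a portrait into square tiles, swap a few at random
# ("errors"), and reassemble. File names and numbers are placeholders.
import random
from PIL import Image

TILE = 50  # tile size in pixels (arbitrary)

portrait = Image.open("self_portrait.jpg").convert("RGB")  # placeholder file name
w, h = portrait.size
cols, rows = w // TILE, h // TILE

# Deconstruct: break the whole into square "items of information"
tiles = [portrait.crop((c * TILE, r * TILE, (c + 1) * TILE, (r + 1) * TILE))
         for r in range(rows) for c in range(cols)]

# Introduce errors: swap a handful of tiles at random
for _ in range(10):
    i, j = random.sample(range(len(tiles)), 2)
    tiles[i], tiles[j] = tiles[j], tiles[i]

# Reconstruct the identity, errors included
rebuilt = Image.new("RGB", (cols * TILE, rows * TILE))
for idx, tile in enumerate(tiles):
    r, c = divmod(idx, cols)
    rebuilt.paste(tile, (c * TILE, r * TILE))
rebuilt.save("reconstructed_with_errors.jpg")
```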

For all its problems, I still like this work. I recently saw something similar done with jigsaw pieces, which looked much better. Rather like the embroidery work, there was something meditative about the slow process of putting the squares together – I used a scanner, so all the squares were face down; I couldn’t actually see what I was doing and they moved at the slightest touch!

I was reminded of a talk by Milo Keller (Contemporary Photography and Technology) where he suggested that the point of producing objects is to slow down the hyper-speed and hyper-production of digital photography. This process does produce a digital image, but through a slow and reflective process.

Keller, Milo (2020) Contemporary Photography and Technology. A Vimeo recording of a lecture from Self Publish, Be Happy, September 10th 2020. At: http://selfpublishbehappy.com/2020/05/online-masterclasses/ [Accessed 17th January 2021 – pay to rent]

Gallery

Our messy emotions vs automated empathy


Project statement

You probably know that AI can be used to identify our faces. It is also used to identify emotions, based on our facial expressions. This is even more problematic. Emotion AI (EAI) models tend to assume that we all experience the same limited range of emotions – rather like the idea of five emotions in Disney’s Inside Out – and express them in more or less the same way.

Our emotions are more complex than this, and we express them in different ways depending on context – we have different smiles for when we’re happy and when we’re being polite; we cry about sad events and sad films. But we know and understand these differences.

EAI is flawed and simplistic, yet it is offered as a solution for, among other things, choosing suitable candidates for jobs, measuring the engagement of students and measuring customer satisfaction.

This series expresses my scepticism that EAI can clearly read and understand emotion and my resistance to its intrusion. I want to reject the possibility that what we feel can, and will, be categorised by EAI; that a program might catch my expression and decide I need to cheer up, calm down or would like to buy something.  

Each self-portrait shows me expressing an emotion. I take a B&W print of the portrait and add embroidery and other stitching to hide that emotion. I also add pins and linking threads, making a ‘net’ to mimic how Emotion AI might take a reading from the face. I use stitching so that the solution to being decoded is based on craft – grounded in handiwork and tradition, and so a more human approach than tech. Finally, the addition of glass is about the way we are seen and see ourselves – through the screens, lenses and optic fibres of the Internet.

Context

We are living in a culture of digital surveillance. This is not something that is imposed on us. In fact we are active or passive participants.

We are immersed in surveillance through the networked devices that have become an essential part of our everyday life. Every interaction with these devices produces data – what we type, what we like, what we look at and for how long. This data is why we don’t pay for the internet in money. Instead, we give tech companies this data for free and they run algorithms to turn that data into money. We know that what we search for, like or share will shape what we are shown and the assumptions made about what we want to buy. Everything is data. Where we go, how we get there and how long it takes us is played back to us in digital maps. What we buy, cook and eat, and when, is all data. Even our health, from our step count or the timing of our periods to doctors’ records, is data.

Photography contributes to the mass of data captured. Our face is scanned to identify us at airports, but also by our phones and laptops. Smart devices like doorbells, cars and vacuum cleaners collect video. Our image, particularly that of our face, can be thought of as a special kind of data. It not only represents our identity but also provides a window to our inner state, a window that can be used to classify, and potentially commoditise, our emotions.

Emotion need no longer be a human sense of vague, indefinable “feelings,” instead emotion is in the process of becoming a “legible,” standardized commodity (Scott 1999) that can be sold, managed, and altered to suit the needs of those in power.             

Sivek (2018) p2

Sivek, S. C. (2018) ‘Ubiquitous Emotion Analytics and How We Feel Today’ In: Daubs, M. S. and Manzerolle, V. R. (eds.) Mobile and Ubiquitous Media: Critical and International Perspectives. New York: Peter Lang. pp.287–301.

How Emotional AI works

The starting point for Emotional AI is the work of Paul Ekman. Ekman was inspired by Charles Darwin, who wrote about the universality of emotions, and by Duchenne de Boulogne, the French neurologist best known for his grotesque photographs of people with their faces contorted into grimaces by the application of electrical current. Duchenne was interested in physiognomy – the relationship between our physical being and our internal being. Ekman was interested in the idea that our faces could show a specific range of emotions, and he worked with an isolated tribe in New Guinea to gain evidence that emotions are universal.

Ekman identified six universal emotions: fear, anger, joy, sadness, disgust, and surprise. These have been used for a number of Emotional AI models, including by Affectiva, one of the largest Emotion AI companies, now a subsidiary of the vision tech company Smart Eye.

This is not the only model used. For example, some tools also label data based on the intensity or valence of the emotion. Some researchers also use more than the face, for example, looking at gait. It is also true that the underlying models will be improving over time.
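For illustration only, this is roughly the shape of output a six-category, Ekman-style classifier might produce. It is not Affectiva’s or any other vendor’s actual API; the scores, valence and intensity fields here are assumptions based on the description above.

```python
# Hypothetical sketch of a six-category, Ekman-style emotion reading.
# Not a real product's API -- just the general shape of the idea.
from dataclasses import dataclass

EKMAN_EMOTIONS = ["fear", "anger", "joy", "sadness", "disgust", "surprise"]

@dataclass
class EmotionReading:
    scores: dict[str, float]   # one probability per basic emotion, summing to ~1
    valence: float             # some tools add a pleasant/unpleasant axis (-1 to 1)
    intensity: float           # ...and how strongly the expression is shown (0 to 1)

    @property
    def top_emotion(self) -> str:
        # The face is simply assigned whichever category scores highest
        return max(self.scores, key=self.scores.get)

reading = EmotionReading(
    scores={"fear": 0.05, "anger": 0.02, "joy": 0.71,
            "sadness": 0.04, "disgust": 0.03, "surprise": 0.15},
    valence=0.6,
    intensity=0.4,
)
print(reading.top_emotion)  # "joy" -- whether the person is actually happy is another matter
```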

Is AI better than humans at recognising emotions?

According to Dupré et al. (2020), the answer is “no”. Or, as we should say with a developing field, it was “no” in 2020. Dupré et al. tested eight different commercially available classifiers and none was as good as a group of humans. Classifiers were better at identifying simulated emotions, but still not as good as humans. This could well be because simulated emotions may be over-acted (see images below).
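As a toy illustration of the kind of comparison Dupré et al. ran, the sketch below scores some invented classifier and human judgements against the intended emotions of a handful of clips. The labels and numbers are made up; it only shows how such an accuracy comparison works.

```python
# Toy accuracy comparison -- all labels below are invented for illustration.
def accuracy(predicted: list[str], intended: list[str]) -> float:
    """Fraction of clips where the predicted emotion matches the intended one."""
    return sum(p == i for p, i in zip(predicted, intended)) / len(intended)

intended = ["joy", "anger", "sadness", "fear", "surprise", "disgust"]

judgements = {
    "human raters": ["joy", "anger", "sadness", "fear", "surprise", "disgust"],
    "classifier A": ["joy", "anger", "joy", "fear", "joy", "surprise"],
    "classifier B": ["surprise", "anger", "sadness", "surprise", "surprise", "disgust"],
}

for name, labels in judgements.items():
    print(f"{name}: {accuracy(labels, intended):.0%}")
# In the 2020 study, none of the eight commercial classifiers matched the human group.
```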

The Dupré et al article gives a very good summary of the problems with classifying emotions:

  • The test used a standard set of 6 basic emotions, but in reality people have a much wider range of emotions.  (One could argue that they are combinations of the basic emotions, but that would not necessarily increase accuracy of classification).
  • Classifiers assume that there is one-to-one correspondence between emotions and facial expression.  But people display emotions in a range of social contexts which will call for different expressions and in some contexts those expressions are simulated – just because someone is smiling does not actually mean they are happy.
  • In addition, there is considerable individual difference in emotional expression.
  • More recently, classifiers have started to include non-affective categories, e.g. interest, pain, boredom, frustration. However, this does not fully address the underlying issues. What it may enable classifiers to do is provide some useful information, e.g. detecting drowsiness in drivers. But this seems a long way from providing accurate information on what people are thinking or feeling.

What does an Emotion AI training set look like?

Here are stills from the videos used in ADFES – a data set of images used to test models of emotion, produced by the Amsterdam Interdisciplinary Centre for Emotion (AICE) at the University of Amsterdam.

https://psyres.uva.nl/content/research-groups/programme-group-social-psychology/adfes-stimulus-set/stimulusset.html

To my eye these look poorly acted.  The first and second expressions, starting from the left, are pretty strange.  Little wonder that models trained on images like these struggle to identify emotions.

How is Emotional AI used?

“MorphCast Interactive Video Platform can offer benefits to:

  • Digital ADV, to effectively capture attention through personalized and interactive videos, increase the emotional engagement of customers and select real people from BOTS
  • Digital learning, to personalize the learning path by monitoring students’ attendance, mood and attention, simplify and personalize the learning process
  • Entertainment, to create interactive videos, films and video clips
  • Retail and OOH applications, to personalize in-store video communication in real time and automatically collect in-depth customer data
  • e-Commerce, to personalize video communication in real time and automatically collect in-depth data from new / unregistered users
  • There are many other industries that can use MorphCast VPaaS to customize and simplify the user experience.”

Affectiva Media Analytics

  • We help businesses understand how their customers and consumers feel when they can’t or won’t say so themselves. By measuring unfiltered and unbiased responses, businesses can act to improve customer experience and marketing campaigns.
  • Human Perception AI. Our software detects all things human: nuanced emotions, complex cognitive states, behaviors, activities, interactions and objects people use.

References

Dupré, D. et al. (2020) ‘A performance comparison of eight commercially available automatic classifiers for facial affect recognition’ In: PLoS ONE 15 (4) p.e0231968.

Ekman, P. (2007) ‘The directed facial action task’ In: Coan, J. A. and Allen, J. J. B. (eds.) Handbook of Emotion Elicitation and Assessment. Oxford: Oxford University Press. pp.47–53.

Ekman, P., Friesen, W. V. and Hager, J. C. (2002) Facial Action Coding System: The Manual on CD-ROM. Instructor’s Guide. Salt Lake City: Network Information Research Co.

Emotional AI: what it is and why it bothers me

Your smartphone can probably recognise your face. We know that electronic passport gates can also do this and we often see AI using CCTV footage to identify people in fictional dramas. Emotional AI (EAI) goes one step further, capturing and decoding our facial expressions to measure what we feel. So, how do we feel about our phones checking what mood we’re in? Or any other device with a camera for that matter?

EAI was originally developed for health-related applications. For example, it was trialled as a way to help people who had difficulty understanding the emotions of others. But now the main uses are commercial (Sivek, 2018). These include solutions that tell employers whether their workers are concentrating[1], help researchers “accurately and authentically understand the emotional response” of interviewees[2] and collect objective data on job candidates[3]. As with many AI applications, EAI captures data (our expressions), classifies it by comparison to a training set using standardised categories or variables, and outputs its best estimate of the emotions we are feeling.
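To make that pipeline concrete, here is a deliberately simplified sketch – a one-nearest-neighbour ‘comparison to a training set’ with invented feature vectors and labels, not how any real EAI product is implemented.

```python
# Deliberately simplified: capture a feature vector from an expression,
# compare it to labelled training examples, output the best-guess category.
# Feature vectors and labels are invented for illustration.
import math

TRAINING_SET = [
    ([0.9, 0.1, 0.2], "happiness"),
    ([0.2, 0.8, 0.1], "sadness"),
    ([0.3, 0.2, 0.9], "surprise"),
]

def classify(expression: list[float]) -> str:
    """Return the label of the nearest training example (1-nearest-neighbour)."""
    def distance(a: list[float], b: list[float]) -> float:
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TRAINING_SET, key=lambda item: distance(item[0], expression))[1]

# Whatever the camera "captures" is squeezed into the closest standardised
# category, however ambiguous the feeling behind it actually was.
print(classify([0.7, 0.2, 0.3]))  # -> "happiness"
```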

There are legitimate concerns about EAI.

  • EAI works on the basis that emotional expression is constant across people and cultures. An alternative view is that we express emotions in ways that fit with both our culture and the situation we are in, for example by smiling when the situation calls for it rather than just when we’re happy.
  • EAI also often assumes a limited core set of emotions (e.g. happiness, sadness, fear, etc.). This idea comes from the work of Paul Ekman and was used in Disney’s Inside Out. An alternative view is that emotions sit on continuums rather than in discrete categories.
  • Dupré et al (2020) checked the accuracy of a number of EAI solutions and found them to be poor, certainly compared to how well humans could identify emotions. To be fair to EAI, we can assume that models will get more ‘accurate’ over time, but what this means is that they will get better matches to their training set. Whether they get better than humans is a different question.
  • Like all AI models, EAI depends on the quality of the data categorisation used (Crawford and Paglen, 2019). EAI has tended to use a fairly simple system based on a limited range of emotions. This means that increased accuracy does not necessarily mean that models will be a better reflection of real life.

All of this is fairly typical of AI. First, there is an assumption that quantity of data trumps theory. Second, there is a tendency to ‘satisfice’ – to aim for answers that are good enough rather than completely accurate.

The best summary I have seen is by McStay (2018) who, to paraphrase, says that EAI replaces an understanding of emotions as ambiguous, bound up with context and culture, with a system that gives a ‘veneer of certainty’ about their classification. 

So we are left in a situation where EAI is not terribly accurate, and we probably don’t want it being used to measure our emotional life. But, on the other hand, if the models do get more accurate, would we really want AI looking beyond our face and into our inner life?


[1] Fujitsu Laboratories Ltd press release: Fujitsu Develops AI Model to Determine Concentration During Tasks Based on Facial Expression. Accessed 29/05/2023.

[2] Information from the Affectiva website. https://www.affectiva.com Accessed 27th May 2023

[3] Information from the MorphCast website, https://www.morphcast.com Accessed 27th May 2023