Exhibition news

I’m very pleased to say that I’ll be exhibiting my work at the Crate gallery in Margate. Please come if you are in the area. I will be posting images and video from the set-up and open days on Instagram @miriamcomber.

“Our Messy Emotions vs Automated Empathy” will be a free exhibition at Crate from 7th to 9th June 2025.

Open: 7th June 4pm to 7pm; 8th and 9th June 11am to 4pm.

Location: Crate, 1 Bilton Square, High Street, Margate CT9 1EE. For more information, please visit cratespace.co.uk

How Emotional AI works

The starting point for Emotional AI is the work of Paul Ekman.  Ekman was inspired by Charles Darwin, who wrote about the universality of emotions. He was also inspired by Duchenne de Boulogne, the French neurologist best known for his grotesque photographs of people with their faces contorted into grimaces by the application of electrical current.  Duchenne was interested in physiognomy – the relationship between our physical being and our internal being.  Ekman was interested in the idea that our faces could show a specific range of emotions.  He also worked with an isolated tribe in New Guinea to gain evidence that emotions are universal.  

Ekman identified six universal emotions: fear, anger, joy, sadness, disgust, and surprise.  These have been used in a number of Emotional AI models, including those from Affectiva, one of the largest Emotion AI companies and now a subsidiary of the vision-tech company Smart Eye.

This is not the only model used. For example, some tools also label data based on the intensity or valence of the emotion. Some researchers also use more than just the face, for example looking at gait. The underlying models will also be improving over time.
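To give a sense of what these models actually return, here is a small, purely illustrative Python sketch (not Affectiva’s or anyone else’s real API): a score for each of the six basic emotions, with hypothetical valence and intensity values added, as some tools do.

```python
# Illustrative only: the shape of an Ekman-style classifier's output,
# not any vendor's actual interface.
from dataclasses import dataclass

EKMAN_EMOTIONS = ["fear", "anger", "joy", "sadness", "disgust", "surprise"]

@dataclass
class EmotionEstimate:
    scores: dict       # probability-like score per basic emotion
    valence: float     # hypothetical extra axis: negative (-1) to positive (+1)
    intensity: float   # hypothetical extra axis: 0 (neutral) to 1 (extreme)

def classify_face(face_image) -> EmotionEstimate:
    """Placeholder for a trained model. Real systems compare facial features
    against a labelled training set and return something of this shape."""
    # Hard-coded example output, purely for illustration.
    scores = {"fear": 0.05, "anger": 0.02, "joy": 0.78,
              "sadness": 0.03, "disgust": 0.02, "surprise": 0.10}
    return EmotionEstimate(scores=scores, valence=0.7, intensity=0.6)

estimate = classify_face(face_image=None)
top_emotion = max(estimate.scores, key=estimate.scores.get)
print(top_emotion, estimate.valence, estimate.intensity)  # e.g. joy 0.7 0.6
```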

Is AI better than humans at recognising emotions?

According to Dupré et al. (2020) the answer is “no”.  Or, as we should say with a developing field, it was “no” in 2020.  Dupré et al. tested eight different commercially available classifiers and none was as good as a group of humans.  Classifiers were better at identifying simulated emotions, but still not as good as humans.  This could well be because simulated emotions may be over-acted (see images below).
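For readers who want a feel for the methodology, here is a toy Python sketch of the kind of comparison involved: scoring each classifier, and a human panel, against the emotion each clip was intended to show. The labels and predictions below are invented purely for illustration, not data from the study.

```python
# Toy comparison of classifier accuracy against a human panel.
# All labels and predictions are made up for illustration.

intended = ["joy", "anger", "fear", "sadness", "disgust", "surprise"]

predictions = {
    "human_panel":  ["joy", "anger", "fear", "sadness", "disgust", "surprise"],
    "classifier_A": ["joy", "anger", "surprise", "sadness", "anger", "surprise"],
    "classifier_B": ["joy", "disgust", "fear", "joy", "disgust", "fear"],
}

def accuracy(pred, truth):
    """Fraction of clips where the predicted label matches the intended one."""
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

for name, pred in predictions.items():
    print(f"{name}: {accuracy(pred, intended):.0%}")
```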

The Dupré et al. article gives a very good summary of the problems with classifying emotions:

  • The test used a standard set of six basic emotions, but in reality people have a much wider range of emotions.  (One could argue that these are combinations of the basic emotions, but that would not necessarily increase the accuracy of classification.)
  • Classifiers assume that there is a one-to-one correspondence between emotions and facial expressions.  But people display emotions in a range of social contexts which call for different expressions, and in some contexts those expressions are simulated – just because someone is smiling does not actually mean they are happy.
  • In addition, there is considerable individual variation in emotional expression.
  • More recently, classifiers have started to include non-affective categories, e.g. interest, pain, boredom, frustration.  However, this does not fully address the underlying issues.  What it may enable classifiers to do is provide some useful information, e.g. detecting drowsiness in drivers.  But this seems a long way from providing accurate information on what people are thinking or feeling.

What does an Emotion AI training set look like?

Here are stills from the video images used in ADFES – a data set of images used to test models of emotion, produced by the Amsterdam Interdisciplinary Centre for Emotion (AICE) at the University of Amsterdam.

https://psyres.uva.nl/content/research-groups/programme-group-social-psychology/adfes-stimulus-set/stimulusset.html

To my eye these look poorly acted.  The first and second expressions, starting from the left, are pretty strange.  Little wonder that models trained on images like these struggle to identify emotions.

How is Emotional AI used?

“MorphCast Interactive Video Platform can offer benefits to:

  • Digital ADV, to effectively capture attention through personalized and interactive videos, increase the emotional engagement of customers and select real people from BOTS
  • Digital learning, to personalize the learning path by monitoring students’ attendance, mood and attention, simplify and personalize the learning process
  • Entertainment, to create interactive videos, films and video clips
  • Retail and OOH applications, to personalize in-store video communication in real time and automatically collect in-depth customer data
  • e-Commerce, to personalize video communication in real time and automatically collect in-depth data from new / unregistered users
  • There are many other industries that can use MorphCast VPaaS to customize and simplify the user experience.”

Affectiva Media Analytics

  • We help businesses understand how their customers and consumers feel when they can’t or won’t say so themselves. By measuring unfiltered and unbiased responses, businesses can act to improve customer experience and marketing campaigns.
  • Human Perception AI. Our software detects all things human: nuanced emotions, complex cognitive states, behaviors, activities, interactions and objects people use.

References

Dupré, D., et al. (2020). A performance comparison of eight commercially available automatic classifiers for facial affect recognition. PLoS ONE, 15(4), e0231968.

Ekman, P. (2007). The directed facial action task. In J. A. Coan & J. J. B. Allen (Eds.), Handbook of Emotion Elicitation and Assessment (pp. 47-53). Oxford University Press.

Ekman, P., Friesen, W. V., & Hager, J. C. (2002). Facial Action Coding System: The manual on CD-ROM. Instructor's guide. Salt Lake City: Network Information Research Co.

Emotional AI: what it is and why it bothers me

Your smartphone can probably recognise your face. We know that electronic passport gates can also do this and we often see AI using CCTV footage to identify people in fictional dramas. Emotional AI (EAI) goes one step further, capturing and decoding our facial expressions to measure what we feel. So, how do we feel about our phones checking what mood we’re in? Or any other device with a camera for that matter?

EAI was originally developed for health-related applications. For example, it was trialled as a way to help people who had difficulty understanding the emotions of others. But now the main uses are commercial (Sivek, 2018).  These include solutions to tell employers whether their workers are concentrating[1], to help researchers “accurately and authentically understand the emotional response” of interviewees[2], and to collect objective data on job candidates[3].  As with many AI applications, EAI captures data (our expressions), classifies it by comparison with a training set using standardised categories or variables, and outputs its best estimate of the emotions we are feeling.
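To make that pipeline concrete, here is a minimal Python sketch of the capture, classify and output steps described above. The nearest-neighbour comparison, the feature vectors and the labels are all illustrative assumptions, not how any particular vendor implements it.

```python
# Schematic sketch of an EAI pipeline: capture an expression, compare it
# against labelled training data, output a best-guess emotion.
import math

# Tiny "training set": feature vectors (imagine measurements of mouth
# curvature, brow position) paired with the standardised label they were
# tagged with. Values are invented for illustration.
TRAINING_SET = [
    ([0.9, 0.1], "happiness"),
    ([0.1, 0.8], "sadness"),
    ([0.2, 0.2], "neutral"),
]

def capture_expression() -> list:
    """Stand-in for a camera frame reduced to facial features."""
    return [0.8, 0.2]

def classify(features):
    """Return the label of the closest training example (nearest neighbour)."""
    return min(TRAINING_SET, key=lambda item: math.dist(item[0], features))[1]

print(classify(capture_expression()))  # -> "happiness"
```

The point of the sketch is simply that the system’s “best estimate” can only ever be a match to whatever labels and examples it was trained on.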

There are legitimate concerns about EAI.

  • EAI works on the basis that emotional expression is constant across people and cultures. An alternative view is that we express emotions in ways that fit with both our culture and the situation we are in, for example by smiling when the situation calls for it rather than just when we’re happy.
  • EAI also often assumes a limited core set of emotions (e.g. happiness, sadness, fear etc.). This idea comes from the work of Paul Ekman and was used in Disney’s Inside Out. An alternative view is that emotions sit on continuums rather than in discrete categories.
  • Dupré et al. (2020) checked the accuracy of a number of EAI solutions and found them to be poor, certainly compared to how well humans could identify emotions. To be fair to EAI, we can assume that models will get more ‘accurate’ over time, but what this means is that they will get better matches to their training set. Whether they get better than humans is a different question.
  • Like all AI models, EAI depends on the quality of the data categorisation used (Crawford and Paglen, 2019). EAI has tended to use a fairly simple system based on a limited range of emotions. This means that increased accuracy does not necessarily mean that models will be a better reflection of real life.

All of this is fairly typical of AI. First, there is an assumption that quantity of data trumps theory. Second, there is a tendency to ‘satisfice’ – to aim for answers that are good enough rather than completely accurate.

The best summary I have seen is by McStay (2018) who, to paraphrase, says that EAI replaces an understanding of emotions as ambiguous, bound up with context and culture, with a system that gives a ‘veneer of certainty’ about their classification. 

So we are left in a situation where EAI is not terribly accurate, and we probably don’t want it being used to measure our emotional life. But, on the other hand, if the models do get more accurate, would we really want AI models looking beyond our faces and into our inner lives?


[1] Fujitsu Laboratories Ltd press release: Fujitsu Develops AI Model to Determine Concentration During Tasks Based on Facial Expression. Accessed 29th May 2023.

[2] Information from the Affectiva website. https://www.affectiva.com Accessed 27th May 2023

[3] Information from the MorphCast website, https://www.morphcast.com Accessed 27th May 2023