Emotional AI: what it is and why it bothers me

Your smartphone can probably recognise your face. Electronic passport gates can do the same, and fictional dramas routinely show AI identifying people from CCTV footage. Emotional AI (EAI) goes one step further, capturing and decoding our facial expressions to measure what we feel. So, how do we feel about our phones checking what mood we’re in? Or any other device with a camera, for that matter?

EAI was originally developed for health-related applications. For example, it was trialled as a way to help people who had difficulty understanding the emotions of others. But now the main uses are commercial (Sivek, 2018). These include solutions to tell employers whether their workers are concentrating[1], to help researchers “accurately and authentically understand the emotional response” of interviewees[2], and to collect objective data on job candidates[3]. As with many AI applications, EAI captures data (our expressions), classifies it against a training set using standardised categories or variables, and outputs its best estimate of the emotions we are feeling.
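
To make that pipeline concrete, here is a minimal sketch of the capture–classify–output loop. Everything in it is invented for illustration: the three-number “expression” features, the tiny training set and the nearest-neighbour matching all stand in for whatever proprietary models real EAI products actually use.

```python
# A toy version of the EAI pipeline described above: reduce a face to
# features, compare against a labelled training set, output a best guess.
# All numbers and labels are made up for illustration.
import math

# Hypothetical training set: each "expression" is a feature vector
# (say, mouth curvature, brow raise, eye openness) with an emotion label.
TRAINING_SET = [
    ((0.9, 0.1, 0.6), "happiness"),
    ((-0.7, 0.2, 0.3), "sadness"),
    ((0.0, 0.9, 0.9), "fear"),
    ((-0.2, 0.8, 0.7), "surprise"),
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(expression):
    """Return the label of the nearest training example, plus a naive
    'confidence' score based on how close that nearest example was."""
    best_features, best_label = min(
        TRAINING_SET, key=lambda item: distance(expression, item[0])
    )
    confidence = 1.0 / (1.0 + distance(expression, best_features))
    return best_label, confidence

# An ambiguous face still comes out as a single tidy label with a score.
print(classify((0.5, 0.3, 0.5)))  # ('happiness', 0.68...)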

There are legitimate concerns about EAI.

  • EAI works on the basis that emotional expression is constant across people and cultures. An alternative view is that we express emotions in ways that fit both our culture and the situation we are in, for example by smiling when the situation calls for it rather than just when we’re happy.
  • EAI also often assumes a limited core set of emotions (e.g. happiness, sadness, fear, etc.). This idea comes from the work of Paul Ekman and was used in Disney’s Inside Out. An alternative view is that emotions sit on continuums rather than in discrete boxes (see the sketch after this list).
  • Dupré et al (2020) checked the accuracy of a number of EAI solutions and found it to be poor, certainly compared to how well humans could identify emotions. To be fair to EAI, we can assume that models will get more ‘accurate’ over time, but what this means is that they will match their training sets better. Whether they get better than humans is a different question.
  • Like all AI models, EAI depends on the quality of the data categorisation used (Crawford and Paglen, 2019). EAI has tended to use a fairly simple system based on a limited range of emotions, so increased accuracy does not necessarily mean that the models are a better reflection of real life.
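
To illustrate the second bullet, here is a toy contrast between the two views: a classifier forced to pick exactly one of a fixed set of Ekman-style categories, versus a continuous representation along dimensions such as valence (how pleasant) and arousal (how intense). The scores and dimensions are made up for the example, not taken from any real system.

```python
# Two ways of representing the same ambiguous expression. The raw scores
# below are invented; real systems would derive them from facial features.
BASIC_EMOTIONS = ["happiness", "sadness", "fear", "anger", "surprise", "disgust"]

def discrete_view(scores):
    """The Ekman-style view: force the scores into exactly one category."""
    label, _ = max(zip(BASIC_EMOTIONS, scores), key=lambda pair: pair[1])
    return label

def continuous_view(valence, arousal):
    """The continuum view: a point in a space, with no single label.
    Valence runs from -1 (unpleasant) to +1 (pleasant); arousal from
    0 (calm) to 1 (agitated)."""
    return {"valence": valence, "arousal": arousal}

# Near-tied scores: the face could plausibly be happy, afraid or surprised...
scores = [0.31, 0.05, 0.30, 0.10, 0.29, 0.05]
print(discrete_view(scores))      # ...but the discrete view says 'happiness'
print(continuous_view(0.2, 0.8))  # the continuum view keeps the ambiguity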

All of this is fairly typical of AI. First, there is an assumption that quantity of data trumps theory. Second, there is a tendency to ‘satisfice’: to aim for answers that are good enough rather than completely accurate.

The best summary I have seen is by McStay (2018) who, to paraphrase, says that EAI replaces an understanding of emotions as ambiguous, bound up with context and culture, with a system that gives a ‘veneer of certainty’ about their classification. 

So, we are left in a situation where EAI is not terribly accurate, and we probably don’t want it being used to measure our emotional life. But, on the other hand, if the models do get more accurate, would we really want AI looking beyond our faces and into our inner lives?


[1] Fujitsu Laboratories Ltd press release: “Fujitsu Develops AI Model to Determine Concentration During Tasks Based on Facial Expression”. Accessed 29 May 2023.

[2] Information from the Affectiva website, https://www.affectiva.com. Accessed 27 May 2023.

[3] Information from the MorphCast website, https://www.morphcast.com. Accessed 27 May 2023.
