Disruptive technologies: Artificial intelligence (AI) and emotion recognition

The use of AI algorithms for emotion recognition may not compromise the integrity of our bodies directly, but it certainly has the potential to undermine personal autonomy, our confidence in decisions we make, and the value we ascribe to free will.


Photo credit: Getty Images

Technological inventions play on human fantasies of attaining superpowers. The telephone, TV, cellular phones, online search engines, and GPS all seem to extend our reach through time and space, automatically augment our knowledge of never-before-seen things and places, and create a pleasant, even if embellished, sense of omnipotence.

But what would be the selling point of emotion recognition AI from the perspective of the individual user? Do we want or need to be told how we feel by a machine? Or is affective data collected by third parties, often without user consent, to serve ulterior ends?


Emotion recognition AI belongs to a new order of technological invention, one that forgoes the individual empowerment of the user for the greater benefit of scientific exploration, corporate interest, or social innovation. These are technologies for processing human behaviour and biometrics, whose best promise to individual users is safety, personal preference prediction, and convenience in a world that seems in tune with their needs and moods.

Intervyo, for example, is a recruitment platform within the IBM Watson Ecosystem that uses data-driven predictive analysis of human behaviour to “Predict Greatness” in prospective new hires. Its virtual interviewer is a compilation of AI algorithms that claim to assess “intrinsic behavioural attributes” and “true emotional sentiments” based on tone of voice, and to reveal “hidden emotional intentions” through “micro-facial gesture analysis.” One promised benefit of an interview process that captures and analyses unconscious bodily signals is efficient and “bias-free” selection. Intervyo also claims that its technology reveals the true character and attributes of candidates, who may be able to bedazzle a human interviewer but are rendered transparent by algorithms.


Emotion recognition AI is already being used in a number of other contexts. Police in China and the UK are using it to identify suspicious people based on their emotional states. Audi, Volvo, and Kia Motors employ emotion recognition to evaluate passenger experience and driver alertness. During the 2020 and 2021 pandemic lockdowns, an emotion recognition tool gained traction in schools across Hong Kong because it helped teachers stay aware of students’ reactions in an online setting (CNN Business, February 17, 2021). The software in question, 4 Little Trees, was created by the Hong Kong-based start-up Find Solution AI. It reads students’ facial expressions while they work on homework and adapts the prompts to keep them engaged. The founder of Find Solution AI, Viola Lam, claims that emotion recognition is of great value for both in-class and distance learning. She also proposes that a similar product could be useful in a business setting “to better understand participants' needs and increase engagement in online meetings and webinars”.


It is easy to see how automated human resource management may be beneficial for large corporations, as well as for governing and law enforcement institutions, but AI technologies that transform the body into a data source are ultimately intended for surveillance. A dominant theme in their marketing campaigns, to both individuals and companies, is the superior capacities of machines over those of humans. Machines are faster, more accurate, more reliable, and more sincere. We humans have to rely on them to know us better than we know ourselves, predict our needs, and decide our lives in our best interest. Ontario Tech Professor Isabel Pedersen, Canada Research Chair in Digital Life, Media, and Culture and author of Ready to Wear: A Rhetoric of Wearable Computers and Reality-Shifting Media (Anderson, SC: Parlor Press), has called this type of rhetoric “dehumanizing”, since it describes humans as deficient in order to assert the necessity and inevitability of emergent technologies.


As an additional layer of facial recognition, emotion recognition brings privacy concerns to a new high. Purportedly, not only is a person identifiable based on facial features, but their facial expressions can now be used to read their emotional reactions and intentions. Even if technically possible, such a degree of transparency of human motive would be ethically problematic and detrimental to the democratic foundations of society. However, many scientists have questioned the predictive value of the metrics of emotion recognition technologies (ERT). Dr. Alexa Hagerty from the Leverhulme Centre for the Future of Intelligence in Cambridge, UK, claims that “ERT is built on shaky scientific ground”. As she notes, the theory of “basic emotions” that underpins most ERT holds that emotions are biologically hard-wired and uniformly expressed across cultures, a claim widely disproved by anthropologists. There is also no scientific confirmation that a “person’s emotional state can be inferred by their facial movements” (The Conversation, April 15, 2021). Her interactive game, “Is emotion recognition ‘emojifying’ you?”, demonstrates this very well.
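
To make the structural problem concrete, here is a minimal sketch, in Python, of the forced-choice logic a “basic emotions” classifier is built around. The weights are random stand-ins rather than a trained model, and the label list is illustrative, not any vendor’s actual taxonomy; the point is simply that a softmax over a fixed set of emotions must return some label for any input, with no way to answer “none of the above”.

```python
# A minimal sketch of the forced-choice logic behind "basic emotions" ERT.
# The weights below are random stand-ins, not a trained model, and the
# label list is illustrative; the structural point is that softmax over a
# fixed taxonomy must pick *some* emotion for any input whatsoever.
import numpy as np

BASIC_EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

rng = np.random.default_rng(0)
W = rng.normal(size=(len(BASIC_EMOTIONS), 128))  # stand-in for trained weights

def classify(face_embedding):
    """Map a 128-d face embedding onto the fixed emotion taxonomy."""
    logits = W @ face_embedding
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    top = int(np.argmax(probs))
    return BASIC_EMOTIONS[top], float(probs[top])

# Even a meaningless input receives a confident-looking emotion label:
label, p = classify(rng.normal(size=128))
print(f"predicted emotion: {label} ({p:.0%})")
```
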
Kate Crawford, a scholar, human rights advocate, and founder of the AI Now Institute in New York, also says that “AI is neither artificial nor intelligent” (The Guardian, June 6, 2021). She explains that AI algorithms work according to embedded “logics of classification”, or “how they are built and trained to see the world”. Moreover, her analysis of classification terms in AI training data sets has revealed that, far from being bias-free, “pictures of people were being matched to words like kleptomaniac, alcoholic, bad person, closet queen, call girl, slut, drug addict and far more”.
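
The same toy setup can illustrate what “logics of classification” means in practice. In the sketch below (again with random stand-in weights and made-up label lists, not any real system), the only thing that changes between the two classifiers is the vocabulary chosen at training time, yet that vocabulary alone determines what the very same face can be “seen” as.

```python
# A sketch of "logics of classification": swap the label vocabulary and the
# very same input is "seen" as something else. Weights are random stand-ins;
# in a real system they would encode whatever associations the labelled
# data contained, biases included. All labels here are made up.
import numpy as np

rng = np.random.default_rng(1)

def build_classifier(labels):
    weights = rng.normal(size=(len(labels), 64))  # stand-in for training
    def classify(x):
        # The answer can only ever come from the training-time vocabulary.
        return labels[int(np.argmax(weights @ x))]
    return classify

neutral_labels = ["student", "teacher", "driver", "customer"]
loaded_labels = ["trustworthy", "suspicious", "deceptive", "compliant"]

face = rng.normal(size=64)  # one and the same face embedding
print(build_classifier(neutral_labels)(face))
print(build_classifier(loaded_labels)(face))
```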


Personally, I would gladly pass on a Brave New World of AI attuned to my emotions in order to avoid being misclassified, or being told what I feel, what I am capable of, and what my intentions are. The more insidious issue is that, most of the time, emotion recognition algorithms are already embedded in applications without our knowledge, or we may have to agree to them if we want a job at a company that uses such assessments as part of its recruitment process. This new disruptive technology may not compromise the integrity of our bodies directly, but it certainly has the potential to undermine personal autonomy, our confidence in our own decision-making, and the value we ascribe to free will.


Dr. Lyuba Encheva is a Research Associate at the Digital Life Institute at OTU, and her research on emotion recognition technologies was funded by SSHRC. More information about her research on the topic can be found at codinghappiness.org.
