Posted by Alexander on April 27, 2021

Emotion A.I., affective computing, and artificial emotional intelligence are all fields creating technology to understand, respond to, measure, and simulate human emotions. Hope runs so high for these endeavors that the projected market value for emotional A.I. is $91.67 billion by 2024. A few examples are revealing: The automotive industry sees value in algorithms determining when drivers are distracted and drowsy. Companies see value in algorithms analyzing how customer support agents talk and computationally coaching them to be better speakers. And researchers see value in children with autism using A.I.-infused glasses to interpret the facial reactions conveyed by people around them.

Not everyone, however, is smiling about emotion-sensing A.I. Indeed, strong criticism is pervasive and high-profile controversies are grabbing headlines. For example, there’s been considerable pushback against companies unfairly using emotion detection and analysis software during interviews to determine a candidate’s “employability score.” The AI Now Institute even called for a ban on emotion-recognition technologies “in important decisions that impact people’s lives.”

I’m excited to talk about the promises and pitfalls of emotion-sensing A.I. with Luke Stark, assistant professor in the Faculty of Information and Media Studies at the University of Western Ontario. Luke has been studying this topic for many years; he sparked widespread discussion by comparing facial recognition technology to plutonium, and he has an exciting new book coming out with MIT Press, Ordering Emotion: Histories of Computing and Human Feelings From Cybernetics to AI. In addition to being a renowned scholar who studies ethical issues, Luke prioritizes ethical action. He recently turned down a prestigious and lucrative Google research scholar award as a gesture of solidarity with former Google employees Timnit Gebru and Margaret Mitchell.

Our conversation has been edited and condensed for clarity.

Evan: The race is on to infuse all kinds of technologies with emotion-sensing A.I. I want your take on why it’s happening now and what people are trying to achieve. But knowing your work, it would be a mistake to dive right in. You’re skeptical of using software to infer our emotions and of the core ideas that drive this ambition, like the much-repeated assertion that there are seven universal emotions.

Luke: It’s such a pleasure to talk to you about this topic, Evan. It’s critical to keep in mind that the companies involved with emotional A.I. only collect proxy data — data that partially describes our complicated emotional states and responses. I probably don’t have to remind anyone reading our conversation of that basic truth! Emotions are simultaneously made up of physiological, mental, psychological, cultural, and individually subjective phenomenological components. No single measurable element of an emotional response is ever going to tell you the whole story. Philosopher Jesse Prinz calls this “the problem of parts.”

To a large degree, then, our emotional responses are inherently interpersonal. By definition, no third party, whether it’s a social media platform or education-technology software, can know for certain how you feel when you’re expressing an emotion. Humans have developed all sorts of culturally specific social conventions to make interpersonal emotional expression more predictable. But several millennia of art and literature make it clear we can’t, as they say, know what’s in someone else’s heart. Some find that fact actively comforting. Others evidently find it frustrating.

Evan: That’s a fascinating comparison. In everyday life, miscommunication can be vexing and exasperating, and it sometimes has deadly consequences. But when given literary expression, the same situations, which we can observe from a somewhat comfortable distance, become dramatically compelling. Misunderstanding is the basis of fascinating plot shifts and nuanced character studies. On the lighter side, it also drives lots of comedy.

Luke: Right, and the resolution of misunderstandings, or reflection on why those resolutions didn’t or couldn’t take place, drives catharsis — releasing and thus getting relief from strong or repressed emotions. I did quite a bit of theater in college. Chatting about it later with a well-known physical computing practitioner, I observed to them that theater direction and interaction design are very similar processes. Material media modulate the social expression of emotion, much like dramatic conventions in the theater, which long predate digital technologies.

Evan: Any other historical emotion-directing technologies come to mind?

Luke: One of the most intriguing historical vignettes I came across while doing dissertation research was that of the “anxious seat.” I think I saw the expression “he’s in the anxious seat” in a mid-20th century newspaper clipping and had no idea what it meant. Anyone familiar with evangelical religion will know, though. The anxious seat is the front pew in a church where those particularly desirous of absolution from the preacher, or pressured into seeking it, go to sit. It’s the emotional locus for the congregation and the service.

“Emotion tech is tied tightly to the dark side of history in ways that aren’t fully appreciated.”

As it turns out, this particular innovation in church architecture and religious practice started in the 1840s during the Second Great Awakening, a major evangelical revival in the United States that produced Baptism and other similar Protestant denominations. It was part of “The New Measures,” which were deliberate changes to how services were performed to make Christianity both more emotive and more “personal.” But the more established Protestant traditions were very skeptical. They called tactics like the anxious seat “human mechanics” — deliberate technical interventions to change the power dynamics between congregation and preacher. Not sure any of this will make it into the book. But it’s a clear example of a set of techniques and technologies for emotional manipulation long before computers.

To make a larger point, emotion tech is tied tightly to the dark side of history in ways that aren’t fully appreciated. Scholars like Otniel Dror, Simone Browne, Sianne Ngai, and Kyla Schuller unpack how the societal appropriateness of particular forms of emotional expression has consistently reinforced and perpetuated structural inequalities of power. For instance, Dror’s historical work explores how white male 19th-century European and American medical scientists settled on physiological proxies for emotional expression like heart rate or blood pressure. Because these proxies were quantifiable, they provided scientists with a comfortable language to discuss their own feelings as well as those of others. This quantitative “emotion-as-number” allowed male scientists to engage with emotion in a way they didn’t feel was feminine, and thus inferior in the hierarchy of Victorian social mores. Similarly, Ngai’s scholarship explores how particular categories of feeling understood as “ugly” are associated with racialized groups and racist hierarchies. And Simone Browne’s must-read book, Dark Matters, connects contemporary forms of biometric surveillance to the longer history of surveilling and policing the activities and feelings of Black bodies.

Since we often express emotion categories through our faces, it’s not surprising that there have been attempts for centuries to infer some greater, longer-lasting truth from these expressions. Unfortunately, those attempts are often tied up with the power asymmetries and, in particular, the historical tendency of white scientists and policymakers to create the racist hierarchies that I just described. Phrenology and physiognomy, the discredited pseudosciences of the late 19th and early 20th centuries that claimed to be able to infer interior character traits from exterior physiological signs, were of course fascinated with emotional expression as a signal of inferiority or superiority in conjunction with physical features. Those ideas have reared their ugly heads again today through facial recognition technologies. But whether researchers in this space realize it or not, they’re always in danger of falling into physiognomic fallacies.

Evan: Sometimes, researchers make this mistake because they have faith in a fallacious formula: Good intentions plus cutting-edge technology equals socially beneficial results. This misplaced idealism drives all kinds of naive projects. It fosters dangerous beliefs, like the idea that automated facial-characterization systems will enable minority candidates to be treated more fairly during interviews because innovation transcends prejudice. Pointing out that prejudice can distort data selection and analysis quickly leads to a dismissive response suggesting the problems are merely temporary and not hard to remedy.

Luke: This gap between intention and reality brings us back to your question about so-called universal emotions. The current paradigm of facial emotion analysis is grounded in the Basic Emotion Theory (BET) of Paul Ekman. Ekman is a psychologist, and he did experiments in the 1970s that he claimed proved a few things: There are a limited number of basic motivating emotions; these emotions are expressed in similar ways around the world; and a trained human observer (or later a computer) can detect someone’s “true” emotion based on facial expression, even if they’re trying to hide it. This idea inspired the TV show Lie to Me!

In a recent paper, computer scientist Jesse Hoey and I run through some of the pitfalls of combining data about different forms of emotional expression, including facial analysis, with incomplete emotional models like BET. The historian of psychology Ruth Leys really takes Ekman’s method to the woodshed in her 2017 book The Ascent of Affect. Also, psychologist Lisa Feldman Barrett and collaborators show fairly conclusively that automated facial-emotion analysis doesn’t work—a conclusion that isn’t surprising given my earlier points about the inherent multifaceted dimensions of emotion.

Evan: Since automated facial analysis is based on faulty assumptions and can generate inaccurate results, why are companies pouring so much effort and expense into the endeavor?

Luke: There are a few reasons why we’re currently at peak interest in emotion-sensing A.I. As I mentioned, the conceptual roots of contemporary models and techniques for measuring proxies for human emotional expression have been around since the late 19th century. An important thing that has changed since then is technical capacity. Now that it’s easy to collect large amounts of data on people, especially through our smartphones, computer scientists want to make sense of it all.

There’s also a belated realization in Silicon Valley that it needs a language to understand human social interaction so that tech companies can continue to profit from and shape it. Although the corporate desire is cloaked in the feel-good language of “community” and “connection,” in reality, it’s quite a parasitic impulse, as Kate Crawford and I explained a few years ago when discussing how social media companies weaponize emoji.

Evan: Interesting! Can you elaborate on this point? Why has emotion become the dominant lens for social media companies to make sense of how we interact with one another? And are you saying that companies like Facebook are manipulating the public by rhetorically disguising their interest in understanding how we feel?

Luke: It’s a little perplexing, isn’t it? As I tried to think through in a recent Twitter thread, I believe it ties to a broader valorization of emotion which is, maybe, an effect of how hard it is for technoscience to get a handle on emotion in the first place. Here’s a seemingly distant, yet illustrative example. We’ve had numerous takes lately about the problem of “self-censorship,” largely from right-wing commentators or right-curious “centrists.” Many others have accurately pointed out that in most cases, “self-censorship” just means not being a mouthy jerk. But doesn’t it seem somewhat odd that tact, which is in theory a pro-social virtue, has been transformed into censorship, at least for this group of people? It’s worth interrogating why that might be the case.

The sociologist Eva Illouz argues that many people increasingly see emotion, or more precisely the social expression of human emotion, as a component of identity, as the “true” manifestation of our authentic selves. Nazanin Andalibi and Justin Buss have a recent paper exploring people’s attitudes about emotion recognition on social media where they point out that these systems are associated with a loss of personal autonomy. That tracks with public reactions to the controversial 2014 Facebook experiment that manipulated the proportion of positive and negative emotion language in News Feed posts. Some upset users described the emotion contagion study as “mind control.” I think it also tracks with the longer history of our vernacular understanding of “character” and “personality,” with the transition from the former to the latter being the shift from something that’s intrinsic to something that’s performed.

I like the term “emotive expression” for this sort of emotional phenomenon. The term emoting captures the fact that these are largely unfiltered behavioral responses, whereas emotion properly defined involves quite a bit of cognitive reflection. And I also think this belief in emoting as authenticity thrives in our digitally mediated world, and is encouraged by the structure and business models of social media platforms, precisely because emotion in all its complexity is so hard to fully get a technical handle on. Social media platforms now employ all sorts of people who serve as, in a nice term coined by my Western colleague Alison Hearn, “feeling intermediaries”: The designers, marketers, and managers who do the work of collecting and translating emotive expression into A.I.-legible data.

As an aside, many philosophical and psychological traditions ask us not to be captive to our emotions, which, of course, doesn’t mean ignoring them. As Kyla Schuller and others have pointed out, the idea of emotional control has its own racialized history. But in the current moment, we have the valorization of emoting, the subjective experience and expression of intense feeling without much reflection.

Evan: In Re-Engineering Humanity, Brett Frischmann and I describe the emotion contagion experiment as Facebook’s attempt to up its mind control game and thereby further erode our autonomy. Our claim has nothing to do with authenticity, though. We contend the more a company can influence what people feel, the more power it can have over how they act and what they think. That’s because action and cognition are so deeply connected to emotions. After all, we’re embodied beings.

Luke: I entirely agree. There’s seemingly a paradox at play here, though. People can simultaneously feel their most authentic selves while emoting in very common or stereotypical ways, or potentially in ways most amenable to being tracked and turned into usable data. I guess one way of resolving that paradox is to say that expressing your feelings is a necessary but incomplete part of authenticity or of self-possession. In Rhetoric, Aristotle famously identifies three different ways to persuade people: through appeals to the character of the speaker, through appeals to the emotional state of the listener, or through appealing to the logic of the argument itself. Perhaps we need to be aware of our thoughts, feelings, and values all at once. Just spitballing here.

Evan: Picking up on your earlier aside, this is why, in my philosophy of peace course, I include a unit on moral emotions and injustice. I think it’s crucial for students to consider Amia Srinivasan’s view that since moral violations provide an intrinsic reason for being angry, “affective injustice” occurs when people, often minorities, are forced to choose between conveying morally appropriate condemnation and acting prudentially. For example, showing moral anger at work is a surefire way to get a poor performance review and maybe get fired. Relatedly, we discuss Myisha Cherry’s argument that moral anger is compatible with a certain type of love (agape). This love comes from wanting people to be their best selves, not deluded into believing they’re morally superior to others and entitled to injure people and groups they deem inferior.

Luke: Absolutely. The sociologist Eduardo Bonilla-Silva, riffing on an observation by bell hooks, says something similar. Anger is potentially useful when connected to “a passion for freedom and justice that illuminates, heals, and makes redemptive struggle possible.” Bonilla-Silva has done brilliant work interrogating the racial economy of emotion in the present. It covers how some forms of emotional expression are still penalized if they come from Black people or other visible minorities, and how the emotions of white people get valorized as “authentic” and thus incontrovertible. It sounds like a great class. How did discussing that material go?

Evan: Great! Students had many smart things to say about whether leaning into moral anger creates a risk of losing perspective and proportionality. And they understand moral anger can be crucial for motivating collective action but can also be a vulnerability that’s easy to exploit.

Luke: The point about exploitation brings us back to where I was going before referencing philosophy. If we understand, perhaps erroneously, that our feelings are the most authentic thing about ourselves and that emoting is the regular way to express our feelings, being tactful is going to feel, literally, like censorship. But let’s not forget that emotive expression through digital platforms also benefits the platforms themselves. It gives them internally consistent proxy data, like what Facebook Reactions icons provide, which can be used to understand social relationships between people on sites like Facebook. I don’t think social media companies are manipulating public opinion around their interest in emotive expression any more than they’re manipulating public opinion about a variety of other things. It’s just that they’ve settled on definitions of words like “community” that conveniently fit with the analytics they’re doing and the kinds of data they’re eager to collect.

Evan: Are there any other significant drivers of emotion-sensing A.I.?

Luke: There’s one more thing worth mentioning. The association of emotion with A.I. also serves a useful discursive function for tech companies. Speculative fiction narratives involving artificial beings created by humans often hinge on emotion. We’re fascinated by artificial beings who can’t feel but should be able to, as in the case of Star Trek’s android, Data. We worry about what will happen if an artificial being’s feelings get out of control, like HAL 9000 from 2001: A Space Odyssey. And we’re fixated on how feelings shape the moral culpability of an artificial being’s creator, as in Mary Shelley’s Frankenstein.

In other words, we’re used to thinking about emotion and advanced artificial general intelligence together. Contemporary narrow A.I. technologies can do nothing of the kind. But it benefits tech firms if people believe by extension that current systems can “feel empathy” or similar. It simultaneously tells the public these technologies are more advanced than they really are. And it distracts from the real impacts of these systems.

Evan: Let’s shift our focus to harms. Before discussing the more tangible ways emotion-sensing A.I. can harm people, I want to ask you about intrinsic harm. Do systems that reductively classify people’s emotions based on the problematic assumptions that you’ve articulated hurt them on an intrinsic level?

Luke: Absolutely! To use terminology inspired by postcolonial theory, you could articulate the harm as one of “emotive imperialism.”

Evan: That’s a great concept. What does it mean?

Luke: It means that computational theorists and practitioners are importing controversial or even discredited theories of emotion into digital systems that presume three incorrect things. First, people can’t control their emotions. Second, our emotions are easy to decode based on a simplistic universal scheme. Third, our emotions reveal things like whether we’re guilty or innocent. Wrong. Wrong. And wrong. And I use the term “imperialism” deliberately. Anthropologists and historians know how much imperialist and neo-imperialist exploitation involves the imposition of particular “structures of feeling” on diverse populations around the world. Companies like Facebook are replicating that homogenization at a global scale. And there’s some great recent work on the neocolonial legacy of A.I. more broadly.

Evan: It’s clear many of us use emoji and other emotional symbols in a reflective way. We don’t believe they fully capture what we feel, but we know they can add context to a conversation. Does Big Tech distort our reliance on these characters and treat it as proof that users also believe in ideas like the notion that there are only a few basic universal emotions?

Luke: For sure. Of course, we know these symbols aren’t the full emotional story. And in fact, that’s why it’s so hard to reliably use the full Unicode emoji character set for sentiment analysis. There’s just too much allusion and context built into how these symbols get used. That’s why Facebook wants to make sure the emotive data it collects through things like Reactions icons is internally consistent. The company creates tools like the “feeling/activity” button that limit our options. And they can both use this data and treat our use of this palette as evidence that we believe it accurately reflects how we feel — even if this isn’t what we intend to convey. So, it’s entirely possible that companies like Facebook have propped up an unfalsifiable condition: a conjectural loop, if you will.

Let’s not forget that Facebook wasn’t always configured this way. Before it introduced Reactions in 2016, Facebook had implemented a more complex set of drop-down emoji tags in 2013. Users had been giving the company negative feedback for years about how socially inappropriate it felt to use the “like” button to acknowledge tragic posts, like an announcement that a loved one has passed away. But that initial wide selection of emoji tags didn’t get much use. In part, I suspect that’s because it wasn’t very prominent. And the company went back to the drawing board. The point is that social media platforms are working very hard to optimize for the collection of these forms of data.
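To make that contrast concrete, here is a minimal, purely illustrative Python sketch (not any platform's actual pipeline, and with made-up scores and helper names): a naive lookup that assigns fixed sentiment values to open-ended Unicode emoji misses allusion and context entirely, while a closed, Reactions-style palette yields internally consistent labels only because user choice is constrained.

```python
# Purely illustrative sketch (hypothetical scores, not any platform's pipeline):
# open-ended emoji scoring vs. a closed, Reactions-style palette.

# Naive approach: assign a fixed sentiment score to each emoji character.
EMOJI_SCORES = {"😀": 1.0, "😢": -1.0, "🔥": 0.5, "💀": -0.5}

def naive_emoji_sentiment(text: str) -> float:
    """Average the fixed scores of any known emoji found in the text."""
    scores = [EMOJI_SCORES[ch] for ch in text if ch in EMOJI_SCORES]
    return sum(scores) / len(scores) if scores else 0.0

# "💀🔥" here is probably admiring slang, but the fixed lookup scores the
# message as exactly neutral: the skull and fire values cancel out.
print(naive_emoji_sentiment("that presentation was so long 💀🔥"))  # 0.0

# Closed palette: users can only pick from a small predefined set, so every
# data point maps onto one label, but only because choice is constrained.
REACTIONS = {"like", "love", "haha", "wow", "sad", "angry"}

def record_reaction(reaction: str) -> str:
    """Accept only reactions from the closed palette."""
    if reaction not in REACTIONS:
        raise ValueError(f"unsupported reaction: {reaction}")
    return reaction

print(record_reaction("sad"))  # 'sad'
```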

Evan: Having gotten to the bottom of the intrinsic harm, what are some of the main instrumental dangers?

Luke: Let’s turn to China. Of course, we need to be careful when discussing what’s happening there. It’s much too easy to fall into Orientalist traps and misleadingly portray Chinese state or commercial behavior as proof that the West is superior by comparison. And it’s really critical to listen to experts on Chinese digital media like Angela Xiao Wu, who does brilliant work on the attention economy in the Chinese context.

But when we look at how Chinese institutions are using so-called high-quality emotion recognition technology, we find that police are treating pseudoscientific inferences about emotion as scientifically valid, inferences that can influence whether someone is found guilty. There have been pilot programs in Chinese schools where students’ emotions are scanned to determine how well they’re paying attention. And these technologies are more widespread in China simply because its emotion-tech market is a little further along than ours. We already know that facial emotion recognition systems, on top of all their other problems, don’t work as well on Black faces. If that practice becomes more widespread in North America, visible minorities will bear the brunt of the sanctions associated with being classified as, for instance, not focusing intently. More widely, all kinds of students will experience chilling effects. And with the boom in online learning platforms prompted by Covid, these technologies are very close to everyday use here.

In short, what’s happening in China now might be a window into the future of how interested parties will try to apply emotion recognition technology in other parts of the world, including in North America. Companies are already doing terrible things with this technology here, like using facial characterization technologies during interviews to judge candidates.

Evan: Didn’t HireVue, the leading company that provides employers with facial characterization technology, decide to stop providing the feature?

Luke: They did, but let’s not make too much of this decision. HireVue hasn’t changed its ambitions or its business model. HireVue software has always used multiple data streams in its automated scoring system: candidates’ vocal tone and sentiment analysis of their writing as well as facial expression. So now that facial recognition technologies are getting a lot of pushback, the company has announced it can get most of the data it wants from voice and text alone. But this isn’t really an improvement! Both of those other forms of analysis are also highly problematic, just not as well-publicized.
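To illustrate the structural point Luke is making (and only that; HireVue's actual scoring system is not public), here is a hedged Python sketch of a generic multi-stream scorer in which dropping the facial stream simply renormalizes the weights over voice and text, so the same kinds of contested inferences still drive the final number. All stream names, weights, and scores are hypothetical.

```python
# Generic illustration only, not HireVue's actual system: a weighted score
# over several analysis streams. Removing one stream just renormalizes the
# weights over the rest. All names, weights, and scores are hypothetical.

HYPOTHETICAL_WEIGHTS = {
    "facial_expression": 0.4,
    "vocal_tone": 0.3,
    "text_sentiment": 0.3,
}

def employability_score(stream_scores: dict, disabled: frozenset = frozenset()) -> float:
    """Weighted average over whichever streams remain enabled."""
    active = {k: w for k, w in HYPOTHETICAL_WEIGHTS.items() if k not in disabled}
    total_weight = sum(active.values())
    return sum(stream_scores[k] * w / total_weight for k, w in active.items())

scores = {"facial_expression": 0.2, "vocal_tone": 0.8, "text_sentiment": 0.6}

# With all three streams, and with facial analysis switched off: the other
# two contested inferences simply carry more of the weight.
print(employability_score(scores))                                            # ≈ 0.5
print(employability_score(scores, disabled=frozenset({"facial_expression"})))  # ≈ 0.7
```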

Evan: If banning facial analysis creates a whack-a-mole problem where automation will scan and classify other bodily channels for emotional information at scale, what’s the best regulatory way forward?

Luke: Well, Jevan Hutson and I are trying to figure that out, not just for emotion recognition but for all forms of what we call “physiognomic A.I.” We’re working on a law review piece that identifies outcomes society should prohibit across all the different ways of making human bodies computationally tractable that are used to infer interior states or characteristics.

I should also give a shoutout here to some earlier work by Elaine Sedenberg and John Chuang that starts to get at these policy questions from an emotion-specific lens.

I’ll have to let you know when we finish our paper.

 

Source https://onezero.medium.com/a-i-cant-detect-our-emotions-3c1f6fce2539
