Ann Arbor Times

Wednesday, September 10, 2025

University of Michigan study highlights emotion AI distrust among marginalized groups

Santa J. Ono, President, University of Michigan - Ann Arbor | University of Michigan - Ann Arbor

Artificial intelligence is transforming many sectors, but not everyone welcomes its advance. According to a new study from the University of Michigan, emotion AI (technology that claims to let machines read human emotions) has caused unease, especially among marginalized communities in the United States.

The study, which surveyed nearly 600 individuals, highlights discomfort with emotion AI across various applications including healthcare, the workplace, vehicles, and even children's toys. Marginalized groups, particularly minorities and people with disabilities, report significantly lower comfort levels with these technologies.

Nazanin Andalibi, assistant professor at the School of Information and lead author of the study, noted, “These comfort discrepancies highlight a pressing need to consider identity when assessing emotion AI’s societal reach.”

People are somewhat more at ease with AI analyzing emotions like happiness and surprise, but the general unease extends to areas such as social media, job interviews, and consumer research. Alexis Shore Ingber, study co-author and research fellow at the School of Information, commented, “Emotion AI claims to infer our deepest, most private feelings. Even if these inferences are not accurate—which many experts say they are not—its rise still raises serious privacy concerns, as demonstrated through individuals’ discomfort across deployment contexts.”

A key finding of the analysis is that identity factors offer deeper insight into how comfortable people feel with emotion AI. For example, people of color generally reported less comfort with emotion AI than white individuals, with exceptions in contexts such as public spaces and job interviews.

Andalibi stressed that developers and policymakers need to address this discomfort and implement protective regulations for "emotion data." Pointing to international practice, Andalibi said, “The European Union banned the use of emotion AI in the workplace and education recently; while that is not perfect, it is a step in the right direction, and I hope the US does better.”

The findings will be presented at the 2025 ACM CHI Conference on Human Factors in Computing Systems in Yokohama, Japan.