Measuring machine learning harms from stereotypes: requires understanding who is being harmed by which errors in what ways

Author: Wang, Angelina, Bai, Xuechunzi, Barocas, Solon, Blodgett, Su Lin
Publication year: 2024
Subject:
Document type: Working Paper
Description: As machine learning applications proliferate, we need an understanding of their potential for harm. However, current fairness metrics are rarely grounded in human psychological experiences of harm. Drawing on the social psychology of stereotypes, we use a case study of gender stereotypes in image search to examine how people react to machine learning errors. First, we use survey studies to show that not all machine learning errors reflect stereotypes, nor are they equally harmful. Then, in experimental studies we randomly expose participants to stereotype-reinforcing, -violating, and -neutral machine learning errors. We find that stereotype-reinforcing errors induce more experientially (i.e., subjectively) harmful experiences, while producing minimal changes to cognitive beliefs, attitudes, or behaviors. This experiential harm impacts women more than men. However, certain stereotype-violating errors are more experientially harmful for men, potentially due to perceived threats to masculinity. We conclude that harm cannot be the sole guide in fairness mitigation, and propose a nuanced perspective depending on who is experiencing what harm and why.
Comment: an earlier draft was presented non-archivally at EAAMO 2023
Database: arXiv