Social norm bias: residual harms of fairness-aware algorithms
Author: Myra Cheng, Maria De-Arteaga, Lester Mackey, Adam Tauman Kalai
Year of publication: 2023
Subject: FOS: Computer and information sciences; Computer Science - Machine Learning; Computer Science - Computers and Society; Artificial Intelligence (cs.AI); Computer Science - Artificial Intelligence; Computer Networks and Communications; Computers and Society (cs.CY); Machine Learning (cs.LG); Computer Science Applications; Information Systems
Source: Data Mining and Knowledge Discovery
ISSN: 1573-756X; 1384-5810
Description: Many modern machine learning algorithms mitigate bias by enforcing fairness constraints across coarsely-defined groups related to a sensitive attribute like gender or race. However, these algorithms seldom account for within-group heterogeneity and biases that may disproportionately affect some members of a group. In this work, we characterize Social Norm Bias (SNoB), a subtle but consequential type of algorithmic discrimination that may be exhibited by machine learning models, even when these systems achieve group fairness objectives. We study this issue through the lens of gender bias in occupation classification. We quantify SNoB by measuring how an algorithm's predictions are associated with conformity to inferred gender norms. When predicting if an individual belongs to a male-dominated occupation, this framework reveals that "fair" classifiers still favor biographies written in ways that align with inferred masculine norms. We compare SNoB across algorithmic fairness methods and show that it is frequently a residual bias and that post-processing approaches do not mitigate this type of bias at all.
Comment: Spotlighted at the 2021 ICML Machine Learning for Data Workshop and presented at the 2021 ICML Socially Responsible Machine Learning Workshop
Database: OpenAIRE
External link:
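
The description above says SNoB is quantified by measuring how an algorithm's predictions are associated with conformity to inferred gender norms. The following is a minimal illustrative sketch of that idea, not the authors' code: it computes a Spearman rank correlation between hypothetical occupation-classifier scores and hypothetical gender-norm-conformity scores. The variable names, the auxiliary norm-scoring model, and the synthetic data are all assumptions made for illustration.

```python
# Minimal sketch (not the paper's implementation) of measuring an association
# between an occupation classifier's scores and conformity to inferred gender norms.
# Assumptions (hypothetical): `occ_scores` are predicted probabilities for a
# male-dominated occupation; `norm_scores` come from some auxiliary model that
# estimates how closely each biography aligns with inferred masculine writing norms.

import numpy as np
from scipy.stats import spearmanr


def snob_association(occ_scores: np.ndarray, norm_scores: np.ndarray) -> float:
    """Spearman rank correlation between occupation-classifier scores and
    gender-norm-conformity scores; values near 0 suggest little association."""
    rho, _ = spearmanr(occ_scores, norm_scores)
    return rho


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic example: norm conformity partially leaks into the classifier score.
    norm_scores = rng.uniform(size=1000)
    occ_scores = 0.6 * norm_scores + 0.4 * rng.uniform(size=1000)
    print(f"SNoB association (Spearman rho): {snob_association(occ_scores, norm_scores):.3f}")
```

A larger correlation under this sketch would indicate that, among biographies, the classifier systematically prefers those written in closer conformity to the inferred norms, even if group-level fairness metrics are satisfied.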