Mathematical Notions vs. Human Perception of Fairness
Authors: | Andreas Krause, Megha Srivastava, Hoda Heidari |
Year of publication: | 2019 |
Subject: | FOS: Computer and information sciences; Computer Science - Computers and Society (cs.CY); Machine learning; Artificial intelligence; Perception; Descriptive research |
Source: | KDD |
Description: | Fairness for machine learning has recently received considerable attention. Various mathematical formulations of fairness have been proposed, and it has been shown that it is impossible to satisfy all of them simultaneously. The literature so far has dealt with these impossibility results by quantifying the tradeoffs between the different formulations of fairness. Our work takes a different perspective on this issue. Rather than requiring all notions of fairness to (partially) hold at the same time, we ask which one of them is the most appropriate given the societal domain in which the decision-making model is to be deployed. We take a descriptive approach and set out to identify the notion of fairness that best captures lay people's perception of fairness. We run adaptive experiments designed to pinpoint, through a small number of tests, the notion of fairness most compatible with each participant's choices. Perhaps surprisingly, we find that the simplest mathematical definition of fairness, namely demographic parity, most closely matches people's idea of fairness in two distinct application scenarios. This conclusion remains intact even when we explicitly tell the participants about the alternative, more complicated definitions of fairness and reduce the cognitive burden of evaluating those notions for them. Our findings have important implications for the Fair ML literature and the discourse on formalizing algorithmic fairness. |
Database: | OpenAIRE |
External link: |
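
The description above refers to competing mathematical formulations of fairness, including demographic parity. As a minimal illustrative sketch (not code from the paper), the snippet below computes a demographic parity gap and an equal opportunity gap on made-up binary decisions; the function names, data, and thresholds are assumptions introduced here for demonstration only.

```python
# Illustrative sketch only: two common mathematical notions of fairness
# evaluated on hypothetical binary decisions. All data is synthetic and
# the helper names are invented for this example.
import numpy as np

def demographic_parity_gap(decisions, group):
    """Absolute difference in positive-decision rates between two groups."""
    rate_a = decisions[group == 0].mean()
    rate_b = decisions[group == 1].mean()
    return abs(rate_a - rate_b)

def equal_opportunity_gap(decisions, labels, group):
    """Absolute difference in true-positive rates among qualified individuals."""
    tpr_a = decisions[(group == 0) & (labels == 1)].mean()
    tpr_b = decisions[(group == 1) & (labels == 1)].mean()
    return abs(tpr_a - tpr_b)

# Hypothetical decisions, true outcomes, and group membership.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)      # protected-group indicator
labels = rng.integers(0, 2, size=1000)     # ground-truth qualification
decisions = rng.integers(0, 2, size=1000)  # model's binary decisions

print("Demographic parity gap:", demographic_parity_gap(decisions, group))
print("Equal opportunity gap: ", equal_opportunity_gap(decisions, labels, group))
```

A decision rule can make one of these gaps small while leaving the other large, which is the kind of tension between formulations that the paper's impossibility discussion refers to.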