Loss-Aversively Fair Classification
Author: | Adish Singla, Muhammad Bilal Zafar, Junaid Ali, Krishna P. Gummadi |
---|---|
Year of publication: | 2021 |
Subject: |
FOS: Computer and information sciences
Computer Science - Machine Learning (cs.LG)
Computer Science - Computers and Society (cs.CY)
Machine learning; Artificial intelligence; Behavioral economics; Prospect theory; Status quo; Proxy (statistics) |
Source: | Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (AIES '19) |
DOI: | 10.48550/arxiv.2105.04273 |
Description: | The use of algorithmic (learning-based) decision making in scenarios that affect human lives has motivated a number of recent studies to investigate such decision-making systems for potential unfairness, such as discrimination against subjects based on sensitive features like gender or race. However, when judging the fairness of a newly designed decision-making system, these studies have overlooked an important influence on people's perceptions of fairness: how the new algorithm changes the status quo, i.e., the decisions of the existing decision-making system. Motivated by extensive literature in behavioral economics and behavioral psychology (prospect theory), we propose a notion of fair updates that we refer to as loss-averse updates. Loss-averse updates constrain the updates to yield improved (more beneficial) outcomes to subjects compared to the status quo. We propose tractable proxy measures that allow this notion to be incorporated into the training of a variety of linear and non-linear classifiers. We show how our proxy measures can be combined with existing measures for training nondiscriminatory classifiers. Our evaluation using synthetic and real-world datasets demonstrates that the proposed proxy measures are effective for their desired tasks. Comment: 8 pages, accepted at AIES 2019 |
Database: | OpenAIRE |
External link: |
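The abstract describes constraining a newly trained classifier so that subjects are not made worse off relative to the status-quo model. A minimal sketch of that idea, assuming a hinge-style penalty on subjects whose new decision score falls below their old score, is shown below; the function name, hyperparameters, and the exact penalty are illustrative assumptions, not the paper's actual proxy measures.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_loss_averse(X, y, w_old, lam=1.0, lr=0.1, epochs=500):
    """Logistic regression with a hinge-style loss-aversion penalty.

    The penalty lam * mean(max(0, s_old - s_new)) discourages the new
    model from scoring any subject below the status-quo model's score,
    a crude proxy for "no subject is made worse off by the update".
    This is an illustrative stand-in, not the authors' formulation.
    """
    n, d = X.shape
    w = np.zeros(d)
    s_old = X @ w_old  # status-quo decision scores
    for _ in range(epochs):
        s_new = X @ w
        p = sigmoid(s_new)
        # gradient of the mean logistic loss
        grad = X.T @ (p - y) / n
        # hinge penalty gradient: active where the new score drops below the old
        worse = s_new < s_old
        if np.any(worse):
            grad -= lam * X[worse].sum(axis=0) / n
        w -= lr * grad
    return w
```

With `lam=0` this reduces to plain logistic regression; increasing `lam` trades predictive fit for fewer subjects scored below the status quo.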