A sociotechnical perspective for explicit unfairness mitigation techniques for algorithm fairness

Authors: Nimisha Singh, Amita Kapoor, Neha Soni
Language: English
Year of publication: 2024
Subject:
Source: International Journal of Information Management Data Insights, Vol 4, Iss 2, p. 100259 (2024)
Document type: article
ISSN: 2667-0968
DOI: 10.1016/j.jjimei.2024.100259
Description: With the increasing use of artificial intelligence (AI) applications in decision making, there are heightened concerns about the fairness of such decisions. Initiatives such as Responsible AI, Fair ML, and Ethics in AI have provided guidelines for developing AI in an attempt to address these challenges. These approaches have been criticized for taking a top-down approach, applying abstract principles to practice without accounting for the context and particularities of algorithm development. Using a sociotechnical lens, we propose a framework for developing fair algorithms. We apply this framework to mitigate unfairness in three distinct datasets: COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), Crimes and Community, and a synthetic dataset. Our methodology involves nonconvex optimization for regression with fairness constraints. The experiments examine the correlation coefficient, Area Under the Curve (AUC), and Root Mean Square Error (RMSE) in relation to a fairness parameter, epsilon. Our findings suggest three objectively testable propositions: 1) fairness constraints and predictive power, 2) fairness constraints and discriminatory ability, and 3) fairness constraints and prediction accuracy.
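The fairness-constrained regression described in the abstract can be illustrated with a simplified sketch: least-squares regression in which the covariance between the model's predictions and a sensitive attribute is bounded by a fairness parameter epsilon. Note this is an assumption for illustration only; the synthetic data, the covariance-based constraint, and the SLSQP solver below are not taken from the paper, and this toy formulation is convex rather than nonconvex.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic data: a binary sensitive attribute s leaks into the first feature,
# so an unconstrained regression yields predictions correlated with s.
n, d = 200, 3
s = rng.integers(0, 2, size=n).astype(float)
X = rng.normal(size=(n, d))
X[:, 0] += 1.5 * s
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=n)

def mse(w):
    """Ordinary least-squares objective."""
    return np.mean((X @ w - y) ** 2)

def pred_cov(w):
    """Covariance between the model's predictions and the sensitive attribute."""
    preds = X @ w
    return np.mean((preds - preds.mean()) * (s - s.mean()))

eps = 0.05  # fairness parameter: |cov(predictions, s)| must stay below eps

# Two smooth inequality constraints encode |cov| <= eps (SLSQP expects g(w) >= 0).
constraints = [
    {"type": "ineq", "fun": lambda w: eps - pred_cov(w)},
    {"type": "ineq", "fun": lambda w: eps + pred_cov(w)},
]
res = minimize(mse, x0=np.zeros(d), method="SLSQP", constraints=constraints)
print(f"constrained MSE: {mse(res.x):.4f}, |cov(pred, s)|: {abs(pred_cov(res.x)):.4f}")
```

Sweeping eps and recording the resulting error and discrimination metrics (RMSE, AUC, correlation) is the kind of trade-off analysis the abstract's three propositions refer to: tightening the fairness constraint generally costs predictive accuracy.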
Database: Directory of Open Access Journals