Author:
Eponeshnikov, Alexander; Sabitov, Rustem; Smirnova, Gulnara; Sabitov, Shamil
Source:
Advances in Systems Science & Applications; 2023, Vol. 23 Issue 4, p41-59, 20p
Abstract:
This paper investigates the balance between accuracy, fairness, and privacy in machine learning through adversarial learning. Differential privacy (DP) provides strong guarantees for protecting individual privacy in datasets, but it can degrade model accuracy and the fairness of its decisions. This paper explores how integrating DP into the adversarial learning framework LAFTR (Learning Adversarially Fair and Transferable Representations) affects fairness and accuracy metrics. Experiments were conducted on the Adult income dataset, classifying individuals into high- and low-income groups based on features such as age and education, with gender treated as the sensitive attribute. Models were trained with different levels of DP noise (controlled by the epsilon hyperparameter) added to different modules: the encoder, the classifier, and the adversary. Results show that adding DP consistently improves fairness metrics such as demographic parity and equalized odds by 3-5% compared to an unfair classifier, albeit at the cost of a 1-3% reduction in accuracy. Stronger adversary models further improve fairness but require careful tuning to avoid instability during training. Overall, with proper configuration, DP models can achieve high fairness with minimal sacrifice of accuracy compared to an unfair classifier. The study provides insights into balancing the competing objectives of privacy, fairness, and accuracy in machine learning models. [ABSTRACT FROM AUTHOR]
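The abstract reports results in terms of demographic parity and equalized odds. As a rough illustration only (not code from the paper), the sketch below computes both gaps for binary predictions and a binary sensitive attribute; the function names and the max-over-rates scalarization of equalized odds are assumptions made for this example.

```python
import numpy as np

def demographic_parity_gap(y_pred, s):
    """Absolute difference in positive-prediction rates between the
    two groups defined by the binary sensitive attribute s."""
    y_pred, s = np.asarray(y_pred), np.asarray(s)
    return abs(y_pred[s == 0].mean() - y_pred[s == 1].mean())

def equalized_odds_gap(y_true, y_pred, s):
    """Largest between-group gap in either the true-positive rate
    or the false-positive rate (one common scalarization)."""
    y_true, y_pred, s = map(np.asarray, (y_true, y_pred, s))
    gaps = []
    for y in (1, 0):  # y=1 gives the TPR gap, y=0 the FPR gap
        g0 = y_pred[(s == 0) & (y_true == y)].mean()
        g1 = y_pred[(s == 1) & (y_true == y)].mean()
        gaps.append(abs(g0 - g1))
    return max(gaps)

# Toy usage with random labels and predictions
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
s = rng.integers(0, 2, 1000)
y_pred = rng.integers(0, 2, 1000)
print(demographic_parity_gap(y_pred, s))
print(equalized_odds_gap(y_true, y_pred, s))
```

A perfectly fair classifier under these definitions drives both gaps to zero; the 3-5% fairness improvements the abstract cites would correspond to reductions in such gaps relative to an unconstrained baseline.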
Database:
Complementary Index