Differentially Private and Fair Machine Learning: A Benchmark Study

Authors: Eponeshnikov, Alexander; Bakhtadze, Natalia; Smirnova, Gulnara; Sabitov, Rustem; Sabitov, Shamil
Source: IFAC-PapersOnLine; January 2024, Vol. 58, Issue 19, pp. 277-282
Abstract: With the increasing adoption of machine learning systems, concerns about bias and privacy have attracted significant research interest. This work investigates the intersection of algorithmic fairness and differential privacy by evaluating differentially private fair representations. The LAFTR framework aims to learn fair data representations while maintaining utility. Differential privacy is injected into model training using DP-SGD to provide formal privacy guarantees. Experiments are conducted on the Adult, German Credit, and CelebA datasets, with gender and age as sensitive attributes. The models are evaluated across various configurations, including the privacy budget epsilon, adversary strength, and dataset characteristics. Results demonstrate that, with proper tuning, differentially private models can learn representations as fair as, or fairer than, those of non-private models. However, introducing privacy reduces stability during training. Overall, the analysis provides insights into the tradeoffs between accuracy, fairness, and privacy for different model configurations across datasets. The results establish a benchmark for further research into differentially private and fair machine learning models, advancing the understanding of model training in the presence of an adversary.
Database: Supplemental Index
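
As a rough illustration of the DP-SGD mechanism the abstract describes, the sketch below implements per-example gradient clipping and Gaussian noising in plain PyTorch on a toy encoder. The network, data, and hyperparameters (clip_norm, noise_multiplier, lr) are hypothetical assumptions for illustration, not the authors' LAFTR configuration.

```python
# Minimal DP-SGD sketch: per-example gradient clipping plus Gaussian noise.
# All names and hyperparameters here are illustrative, not from the paper.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical encoder standing in for the LAFTR representation network.
encoder = nn.Sequential(nn.Linear(10, 8), nn.ReLU(), nn.Linear(8, 2))
loss_fn = nn.CrossEntropyLoss()

clip_norm = 1.0         # per-example gradient norm bound C
noise_multiplier = 1.1  # sigma; larger sigma means a smaller epsilon
lr = 0.1

x = torch.randn(32, 10)          # toy batch of features
y = torch.randint(0, 2, (32,))   # toy labels

params = [p for p in encoder.parameters() if p.requires_grad]
summed = [torch.zeros_like(p) for p in params]

# Compute each example's gradient, clip its total norm to C, and sum.
for xi, yi in zip(x, y):
    encoder.zero_grad()
    loss_fn(encoder(xi.unsqueeze(0)), yi.unsqueeze(0)).backward()
    grads = [p.grad.detach().clone() for p in params]
    total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
    scale = torch.clamp(clip_norm / (total_norm + 1e-6), max=1.0)
    for s, g in zip(summed, grads):
        s += g * scale

# Add noise calibrated to the clipping bound, then average and step.
with torch.no_grad():
    for p, s in zip(params, summed):
        noise = torch.normal(0.0, noise_multiplier * clip_norm, s.shape)
        p -= lr * (s + noise) / len(x)
```

In the paper's setting this noisy update would be applied alongside LAFTR's adversarial objective; in practice, libraries such as Opacus automate the same clipping-and-noising and track the accumulated privacy budget epsilon.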