CryptoCredit: Securely Training Fair Models

Author: Antigoni Polychroniadou, Jiahao Chen, Leo de Castro
Year of publication: 2020
Subject:
Source: ICAIF
DOI: 10.48550/arxiv.2010.04840
Description: When developing models for regulated decision making, sensitive features like age, race, and gender cannot be used and must be obscured from model developers to prevent bias. However, the remaining features still need to be tested for correlation with sensitive features, which can only be done with knowledge of those features. We resolve this dilemma using a fully homomorphic encryption scheme, allowing model developers to train linear regression and logistic regression models and test them for possible bias without ever revealing the sensitive features in the clear. We demonstrate how it can be applied to leave-one-out regression testing, and we show, using the adult income data set, that our method is practical to run.
Comment: 8 pages
Database: OpenAIRE
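
The description above mentions testing non-sensitive features for correlation with encrypted sensitive features, but this record does not spell out the protocol. The sketch below is a minimal illustrative analogue of that encrypted correlation check, not the paper's implementation: it assumes the TenSEAL CKKS library as a stand-in FHE backend (the paper does not prescribe this library) and uses synthetic data in place of the adult income features. The data owner encrypts the centered sensitive column, the model developer computes a homomorphic inner product against a candidate feature without ever decrypting individual values, and only the single aggregate statistic is revealed.

```python
# Illustrative sketch only: encrypted correlation test between a candidate
# feature and a sensitive attribute, assuming TenSEAL/CKKS as the FHE backend.
import numpy as np
import tenseal as ts

# --- Data owner side ----------------------------------------------------
ctx = ts.context(ts.SCHEME_TYPE.CKKS,
                 poly_modulus_degree=8192,
                 coeff_mod_bit_sizes=[60, 40, 40, 60])
ctx.global_scale = 2 ** 40
ctx.generate_galois_keys()  # needed for homomorphic sum/rotation in dot()

rng = np.random.default_rng(0)
sensitive = rng.integers(0, 2, size=64).astype(float)   # protected attribute (synthetic)
sensitive_centered = sensitive - sensitive.mean()
enc_sensitive = ts.ckks_vector(ctx, sensitive_centered.tolist())

# --- Model developer side (sees only the ciphertext) --------------------
feature = rng.normal(size=64)                            # candidate model feature (synthetic)
feature_centered = (feature - feature.mean()).tolist()

# Homomorphic inner product <feature - mean, sensitive - mean>;
# the developer never decrypts the sensitive column.
enc_cross_term = enc_sensitive.dot(feature_centered)

# --- Data owner decrypts only the aggregate statistic -------------------
cross = enc_cross_term.decrypt()[0]
corr = cross / (np.std(feature) * np.std(sensitive) * len(feature))
print(f"approximate correlation with sensitive feature: {corr:.3f}")
```

In this toy flow the developer learns at most one correlation coefficient per candidate feature, which mirrors the kind of bias test the abstract describes; the paper's actual scheme additionally trains linear and logistic regression models and runs leave-one-out tests entirely under encryption.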