Purifier: Defending Data Inference Attacks via Transforming Confidence Scores

Author: Yang, Ziqi; Wang, Lijin; Yang, Da; Wan, Jie; Zhao, Ziming; Chang, Ee-Chien; Zhang, Fan; Ren, Kui
Publication Year: 2022
Subject:
Document Type: Working Paper
Description: Neural networks are susceptible to data inference attacks such as the membership inference attack, the adversarial model inversion attack, and the attribute inference attack, where an attacker can infer useful information about a data sample, such as its membership in the training set, its reconstruction, or its sensitive attributes, from the confidence scores predicted by the target classifier. In this paper, we propose a method, named PURIFIER, to defend against membership inference attacks. It transforms the confidence score vectors predicted by the target classifier so that the purified confidence scores are indistinguishable between members and non-members in individual shape, statistical distribution, and prediction label. Experimental results show that PURIFIER defends against membership inference attacks with high effectiveness and efficiency, outperforming previous defense methods while incurring negligible utility loss. Our further experiments show that PURIFIER is also effective in defending against adversarial model inversion attacks and attribute inference attacks: for example, with PURIFIER deployed, the inversion error on the Facescrub530 classifier increases by a factor of about four, and the attribute inference accuracy drops significantly.
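
Note: The abstract only sketches the mechanism, so the following is a minimal, hypothetical PyTorch illustration of the idea it describes: an autoencoder-style network that sits between the target classifier and the user and reconstructs confidence vectors before they are released. The class names, architecture, and training losses below are assumptions made for illustration, not the paper's actual implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Purifier(nn.Module):
        """Hypothetical purifier: maps a confidence vector to a purified one."""
        def __init__(self, num_classes: int, latent_dim: int = 32):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(num_classes, 128), nn.ReLU(),
                nn.Linear(128, latent_dim),
            )
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim, 128), nn.ReLU(),
                nn.Linear(128, num_classes),
            )

        def forward(self, conf: torch.Tensor) -> torch.Tensor:
            z = self.encoder(conf)
            logits = self.decoder(z)
            # Softmax keeps the released scores a valid probability vector.
            return F.softmax(logits, dim=-1)

    def train_step(purifier: Purifier, conf_batch: torch.Tensor,
                   optimizer: torch.optim.Optimizer) -> float:
        """One training step on confidence vectors from a reference set
        (assumed objective: reconstruct the scores while preserving the
        predicted label, so released scores follow one shared distribution)."""
        optimizer.zero_grad()
        purified = purifier(conf_batch)
        recon_loss = F.mse_loss(purified, conf_batch)
        # Penalize changing the top-1 prediction to limit utility loss.
        label_loss = F.nll_loss(torch.log(purified + 1e-8),
                                conf_batch.argmax(dim=-1))
        loss = recon_loss + label_loss
        loss.backward()
        optimizer.step()
        return loss.item()

    def release_scores(target_model: nn.Module, purifier: Purifier,
                       x: torch.Tensor) -> torch.Tensor:
        """Inference-time pipeline: classifier -> purifier -> released scores."""
        with torch.no_grad():
            conf = F.softmax(target_model(x), dim=-1)
            return purifier(conf)

Under this reading, the attacker only ever sees the output of release_scores, so any member-specific signal carried in the raw confidence vector's shape or distribution is squeezed out by the reconstruction, which is what would make membership inference, model inversion, and attribute inference harder.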
Comment: accepted by AAAI 2023
Database: arXiv