Learning Robust and Privacy-Preserving Representations via Information Theory

Author: Zhang, Binghui, Noorbakhsh, Sayedeh Leila, Dong, Yun, Hong, Yuan, Wang, Binghui
Publication year: 2024
Subject:
Document type: Working Paper
Description: Machine learning models are vulnerable to both security attacks (e.g., adversarial examples) and privacy attacks (e.g., private attribute inference). We take a first step toward mitigating both security and privacy attacks while also maintaining task utility. In particular, we propose an information-theoretic framework that achieves these goals through the lens of representation learning, i.e., learning representations that are robust to both adversarial examples and attribute inference adversaries. We also derive novel theoretical results under our framework, e.g., an inherent trade-off between adversarial robustness/utility and attribute privacy, and a guaranteed bound on attribute privacy leakage against attribute inference adversaries.
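To make the stated goals concrete, one common information-theoretic formulation of robust, attribute-private representation learning (an illustrative sketch only; the paper's exact objective may differ) learns an encoder $f_\theta$ whose representation $Z$ retains task information about the label $Y$, suppresses information about the private attribute $S$, and is computed under worst-case input perturbations:

```latex
% Illustrative objective (assumed form, not necessarily the paper's):
% maximize task-relevant information I(Z;Y),
% penalize private-attribute leakage I(Z;S) with weight \beta,
% and require robustness to perturbations \delta of bounded norm.
\max_{\theta} \; \min_{\|\delta\| \le \epsilon} \; I(Z; Y) - \beta \, I(Z; S),
\qquad Z = f_\theta(x + \delta).
```

Under such a formulation, the trade-off mentioned in the abstract arises because shrinking $I(Z;S)$ (less attribute leakage) can also remove information correlated with $Y$, reducing $I(Z;Y)$ and hence utility or robustness.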
Database: arXiv