Evaluating the privacy exposure of interpretable global explainers

Authors: Francesca Naretto, Anna Monreale, Fosca Giannotti
Contributors: Naretto, Francesca; Monreale, Anna; Giannotti, Fosca
Language: English
Subject:
Description: In recent years we have witnessed the diffusion of AI systems based on powerful Machine Learning models, which find application in many critical contexts such as medicine, financial markets and credit scoring. In such contexts it is particularly important to design Trustworthy AI systems that guarantee both transparency of their decision reasoning and privacy protection. Although many works in the literature have addressed the lack of transparency and the risk of privacy exposure of Machine Learning models, the privacy risks of explainers have not been appropriately studied. This paper presents a methodology for evaluating the privacy exposure raised by interpretable global explainers that imitate the original black-box classifier. Our methodology exploits the well-known Membership Inference Attack. The experimental results highlight that global explainers based on interpretable trees lead to an increase in privacy exposure.
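The abstract's setup can be illustrated with a minimal sketch: a black-box classifier, an interpretable tree surrogate fitted to imitate it, and a confidence-thresholding Membership Inference Attack run against both. This is an assumption-laden toy (synthetic data, a simple threshold attack, hypothetical names such as `mia_advantage`), not the authors' actual methodology.

```python
# Toy sketch of a Membership Inference Attack (MIA) applied to a black-box
# model and to an interpretable global surrogate that imitates it.
# Synthetic data and a naive confidence-threshold attack: illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

def mia_advantage(model, members, non_members, threshold=0.9):
    """Attack advantage: fraction of true members flagged as members
    minus the fraction of non-members flagged as members."""
    def flagged(X):
        conf = model.predict_proba(X).max(axis=1)  # top-class confidence
        return (conf >= threshold).mean()
    return float(flagged(members) - flagged(non_members))

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_out, y_train, y_out = train_test_split(
    X, y, test_size=0.5, random_state=0)

# Black-box classifier trained on the private (member) data.
black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Global interpretable explainer: a shallow tree fitted to imitate the
# black-box's predictions on the training data.
surrogate = DecisionTreeClassifier(max_depth=8, random_state=0).fit(
    X_train, black_box.predict(X_train))

adv_bb = mia_advantage(black_box, X_train, X_out)
adv_sur = mia_advantage(surrogate, X_train, X_out)
print(f"MIA advantage on black-box: {adv_bb:.3f}")
print(f"MIA advantage on surrogate: {adv_sur:.3f}")
```

A positive advantage means the attacker distinguishes training members from non-members better than chance; comparing the two values is the kind of measurement the paper's methodology formalizes.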
Database: OpenAIRE