Privacy Distillation: Reducing Re-identification Risk of Multimodal Diffusion Models

Authors: Fernandez, Virginia; Sanchez, Pedro; Pinaya, Walter Hugo Lopez; Jacenków, Grzegorz; Tsaftaris, Sotirios A.; Cardoso, Jorge
Publication Year: 2023
Subject:
Document Type: Working Paper
Description: Knowledge distillation in neural networks refers to compressing a large model or dataset into a smaller version of itself. We introduce Privacy Distillation, a framework that allows a text-to-image generative model to teach another model without exposing it to identifiable data. Here, we are interested in the privacy issue faced by a data provider who wishes to share their data via a multimodal generative model. A question that immediately arises is "How can a data provider ensure that the generative model is not leaking identifiable information about a patient?". Our solution consists of (1) training a first diffusion model on real data, (2) generating a synthetic dataset using this model and filtering it to exclude images with a re-identification risk, and (3) training a second diffusion model on the filtered synthetic data only. We show that datasets sampled from models trained with Privacy Distillation can effectively reduce re-identification risk whilst maintaining downstream performance.
Database: arXiv
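
The filtering step (2) is the core privacy mechanism of the pipeline described above. The abstract does not specify how re-identification risk is scored, so the following is a minimal sketch under assumptions: it supposes an identity encoder (e.g. a re-identification network) has already produced embeddings for the real and synthetic images, and it discards any synthetic image whose maximum cosine similarity to a real image exceeds a threshold. The function name `filter_synthetic`, the threshold `tau`, and the embedding inputs are all hypothetical, not taken from the paper.

```python
import numpy as np

def filter_synthetic(real_emb, synth_emb, synth_images, tau):
    """Keep only synthetic images with low similarity to every real image.

    real_emb:     (n_real, d) identity embeddings of the real dataset
    synth_emb:    (n_synth, d) identity embeddings of the synthetic samples
    synth_images: list of n_synth synthetic images (or paths to them)
    tau:          cosine-similarity threshold above which an image is
                  considered a re-identification risk (assumed, not from paper)
    """
    # L2-normalize so the dot product equals cosine similarity
    real = real_emb / np.linalg.norm(real_emb, axis=1, keepdims=True)
    synth = synth_emb / np.linalg.norm(synth_emb, axis=1, keepdims=True)

    # For each synthetic image, its highest similarity to any real image
    max_sim = (synth @ real.T).max(axis=1)

    # Retain only images below the risk threshold
    keep = max_sim < tau
    return [img for img, k in zip(synth_images, keep) if k]

# Toy usage with random embeddings standing in for encoder outputs
rng = np.random.default_rng(0)
real_emb = rng.normal(size=(100, 128))
synth_emb = rng.normal(size=(50, 128))
synth_images = [f"synthetic_{i}.png" for i in range(50)]
kept = filter_synthetic(real_emb, synth_emb, synth_images, tau=0.3)
print(f"kept {len(kept)} of {len(synth_images)} synthetic images")
```

In the full pipeline, the surviving images would then form the training set for the second diffusion model, which is the only model the data provider shares.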