Analyzing Information Leakage of Updates to Natural Language Models
Author: | Marc Brockschmidt, Santiago Zanella-Béguelin, Andrew Paverd, Lukas Wutschitz, Victor Rühle, Olga Ohrimenko, Boris Köpf, Shruti Tople |
Year: | 2020 |
Subject: | FOS: Computer and information sciences; Machine Learning (cs.LG); Cryptography and Security (cs.CR); Computation and Language (cs.CL); Machine Learning (stat.ML); language models; information leakage; training data; differential privacy |
Source: | CCS |
Description: | To continuously improve quality and reflect changes in data, machine learning applications have to regularly retrain and update their core models. We show that a differential analysis of language model snapshots before and after an update can reveal a surprising amount of detailed information about changes in the training data. We propose two new metrics, *differential score* and *differential rank*, for analyzing the leakage due to updates of natural language models. We perform leakage analysis using these metrics across models trained on several different datasets using different methods and configurations. We discuss the privacy implications of our findings, propose mitigation strategies, and evaluate their effect. |
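The two metrics can be illustrated with a minimal sketch. This is an illustrative reading of the abstract, not the paper's own code: here *differential score* sums the per-token change in next-token probability between the two model snapshots, and *differential rank* is a phrase's position among candidates ordered by that score. All names and the toy models are assumptions for the example.

```python
from typing import Callable, Dict, List

# A "model" here is any function mapping a token context to a
# next-token probability distribution (illustrative interface).
ProbModel = Callable[[List[str]], Dict[str, float]]

def differential_score(old: ProbModel, new: ProbModel, seq: List[str]) -> float:
    """Sum over the sequence of the change in next-token probability
    between the updated snapshot and the original snapshot."""
    return sum(
        new(seq[:i]).get(tok, 0.0) - old(seq[:i]).get(tok, 0.0)
        for i, tok in enumerate(seq)
    )

def differential_rank(old: ProbModel, new: ProbModel,
                      candidates: List[List[str]],
                      target: List[str]) -> int:
    """Position of `target` when candidates are sorted by decreasing
    differential score (0 = the phrase whose probability grew most)."""
    t = differential_score(old, new, target)
    return sum(1 for c in candidates
               if differential_score(old, new, c) > t)

# Toy snapshots: the update shifts probability mass toward "beta",
# as if phrases containing "beta" entered the new training data.
def old_model(ctx: List[str]) -> Dict[str, float]:
    return {"alpha": 0.5, "beta": 0.5}

def new_model(ctx: List[str]) -> Dict[str, float]:
    return {"alpha": 0.3, "beta": 0.7}
```

In this toy setting, the phrase `["beta", "beta"]` gets a positive differential score (its probability rose after the update) and rank 0 among the candidates, which is the kind of signal an adversary comparing snapshots could exploit.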
Database: | OpenAIRE |
External link: |