No Classifier Left Behind: An In-depth Study of the RBF SVM Classifier's Vulnerability to Image Extraction Attacks via Confidence Information Exploitation
Author: Michael R. Clark, Andrew Alten, Peter Swartz, Raed M. Salih
Year of publication: 2020
Subject: Reverse engineering; Machine learning; Support vector machine; Classifier; Training; Data modeling; Image processing and computer vision; Artificial intelligence; Computer science; Vulnerability
Source: CogMI
DOI: 10.1109/cogmi50398.2020.00037
Description: Training image extraction attacks attempt to reverse engineer training images from an already trained machine learning model. Such attacks are concerning because training data can often be sensitive in nature. Recent research has shown that extracting training images is generally much harder than model extraction, which attempts to duplicate the functionality of the model. In this paper, we correct common misperceptions about image extraction attacks and develop a deep understanding of why some trained models are vulnerable to our attack while others are not. In particular, we use the RBF SVM classifier to show that we can extract individual training images from models trained on thousands of images, which refutes the notion that these attacks can only extract an “average” of each class. We also show that increasing the diversity of the training data set leads to more successful attacks. To the best of our knowledge, our work is the first to show semantically meaningful images extracted from the RBF SVM classifier.
Database: OpenAIRE
External link:
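The vulnerability described above is easiest to see from the RBF SVM's decision function, f(x) = Σ_i α_i y_i exp(−γ ‖x − x_i‖²) + b, whose support vectors x_i are literal training images, so the model's confidence scores carry information about individual training points. The sketch below is a minimal illustration of a confidence-exploitation attack in that spirit, not the paper's actual method: it hill-climbs scikit-learn's predict_proba output to synthesize a high-confidence input for a target class. The dataset, hyperparameters, and the extract_image helper are all illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's method) of a confidence-
# exploitation attack on an RBF SVM: start from random noise and greedily
# perturb pixels to raise the victim classifier's confidence for one class.
import numpy as np
from sklearn import datasets
from sklearn.svm import SVC

# Victim model: an RBF SVM trained on 8x8 digit images (pixels scaled to [0, 1]).
digits = datasets.load_digits()
X, y = digits.data / 16.0, digits.target
victim = SVC(kernel="rbf", gamma="scale", probability=True).fit(X, y)

def extract_image(model, target_class, n_pixels, steps=3000, step=0.1, seed=0):
    """Greedy coordinate ascent on the model's confidence for target_class.
    The attacker only queries predict_proba (confidence information)."""
    rng = np.random.default_rng(seed)
    x = rng.random(n_pixels)                       # random starting "image"
    best = model.predict_proba([x])[0][target_class]
    for _ in range(steps):
        i = rng.integers(n_pixels)                 # pick a random pixel
        candidate = x.copy()
        candidate[i] = np.clip(candidate[i] + rng.choice([-step, step]), 0.0, 1.0)
        conf = model.predict_proba([candidate])[0][target_class]
        if conf > best:                            # keep perturbations that raise confidence
            x, best = candidate, conf
    return x, best

img, conf = extract_image(victim, target_class=3, n_pixels=X.shape[1])
print(f"synthesized a class-3 image with confidence {conf:.3f}")
```

Whether such a synthesized input resembles an individual training example or only an "average" of the class depends on the model and the diversity of the training data, which is precisely the question the paper investigates.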