Private Deep Neural Network Models Publishing for Machine Learning as a Service
Author: | Zhifei Zhu, Yuan Zhang, Sheng Zhong, Yunlong Mao, Wenbo Hong, Boyu Zhu |
---|---|
Year of publication: | 2020 |
Subject: | Service (systems architecture), Artificial neural network, Computer science, Deep learning, Service provider, Inference attack, Machine learning, Artificial noise, Differential privacy, Quality (business), Artificial intelligence |
Source: | IWQoS |
DOI: | 10.1109/iwqos49365.2020.9212853 |
Description: | Machine learning as a service has emerged recently to relieve the tension between resource-heavy deep learning tasks and growing application demand. A deep learning service provider can help its clients benefit from deep learning techniques at an affordable price, sparing them huge resource consumption. However, the service provider may have serious concerns about model privacy when a deep neural network model is published. Previous model publishing solutions mainly depend on additional artificial noise: by adding elaborately crafted noise to parameters or gradients during the training phase (a minimal sketch of this baseline appears after this record), strong privacy guarantees such as differential privacy can be achieved. However, this kind of approach cannot give guarantees on other aspects, such as the quality of the noise-perturbed model or the convergence of the modified learning algorithm. In this paper, we propose an alternative private deep neural network model publishing solution that causes no interference in the original training phase. We provide privacy, convergence, and quality guarantees for the published model at the same time. Furthermore, our solution achieves a smaller privacy budget than the artificial-noise-based training solutions proposed in previous works. Specifically, our solution gives an acceptable test accuracy with privacy budget ε = 1, while membership inference attack accuracy is decreased from nearly 90% to around 60% across all classes. |
Database: | OpenAIRE |
External link: |
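The abstract contrasts the paper's approach with the artificial-noise training baseline, i.e. DP-SGD-style gradient perturbation. As context for that baseline only, and not as the paper's own method, here is a minimal sketch of noise-added training for binary logistic regression; the function name `dp_sgd_step`, the hyperparameters, and the toy data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def dp_sgd_step(w, X_batch, y_batch, lr=0.1, clip_norm=1.0, noise_multiplier=1.1):
    """One DP-SGD-style step: clip each per-example gradient to an L2
    bound, sum, add Gaussian noise calibrated to that bound, average."""
    grads = []
    for x, y in zip(X_batch, y_batch):
        p = 1.0 / (1.0 + np.exp(-x @ w))                 # sigmoid prediction
        g = (p - y) * x                                  # per-example logistic-loss gradient
        g = g / max(1.0, np.linalg.norm(g) / clip_norm)  # clip to L2 norm <= clip_norm
        grads.append(g)
    # Gaussian noise with std proportional to the clipping bound.
    noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
    g_private = (np.sum(grads, axis=0) + noise) / len(X_batch)
    return w - lr * g_private

# Toy usage on synthetic, linearly separable data (hypothetical sizes).
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 5))
y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 1.5]) > 0).astype(float)
w = np.zeros(5)
for step in range(200):
    idx = rng.choice(len(X), size=32, replace=False)
    w = dp_sgd_step(w, X[idx], y[idx])
```

The noise scale (`noise_multiplier * clip_norm`) is what a privacy accountant translates into a budget ε over the course of training. The abstract's point is that this perturbation, while yielding differential privacy, interferes with training itself, which is why the baseline cannot guarantee the quality or convergence of the published model.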