Protecting image privacy through adversarial perturbation.

Author: Liang, Baoyu; Tong, Chao; Lang, Chao; Wang, Qinglong; Rodrigues, Joel J. P. C.; Kozlov, Sergei
Source: Multimedia Tools & Applications; Oct 2022, Vol. 81, Issue 24, p34759-34774, 16p
Abstract: In the current digital era, users of various social media platforms upload photos on a daily basis, and these photos often contain a tremendous amount of private information. Although the private information contained in photos can help enterprises provide users with better services, it is also at risk of being disclosed. In particular, with deep learning techniques developed for object detection tasks, users' private information can be extracted with little difficulty. We therefore propose an approach to prevent DNN detectors from detecting private objects, especially the human body. An algorithm is developed by exploiting an inherent vulnerability of deep learning models known as the adversarial sample problem, and it is integrated into a general framework that is also proposed in this work. We evaluate our method on the task of reducing the performance of DNN detectors on the PASCAL VOC dataset. Our proposed algorithm reduces the recall of human detection from 81.1% to 18.0% while making only minor changes to pixel values. The results show that our proposed method performs remarkably well at preventing private information from being exposed by DNN detectors, while causing very limited degradation to the visual quality of images. [ABSTRACT FROM AUTHOR]
Database: Complementary Index
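
The abstract does not specify the authors' algorithm beyond its use of the adversarial sample problem. For orientation, the sketch below shows the canonical gradient-sign (FGSM-style) perturbation that this family of attacks builds on; it is a hypothetical illustration, not the paper's method, and it uses a classifier stand-in (ResNet-18) rather than a detector for brevity. The `fgsm_perturb` function, the epsilon value, and the placeholder image are all assumptions for illustration.

```python
# Minimal FGSM-style sketch (hypothetical; not the paper's algorithm).
import torch
import torch.nn.functional as F
import torchvision.models as models

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Add a small adversarial perturbation that pushes `model`
    away from predicting `label` for `image`.
    image: (1, 3, H, W) tensor in [0, 1]; label: (1,) class index.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then clamp
    # so the result remains a valid image in [0, 1].
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Usage: perturb an image so a pretrained model's current
# prediction (a proxy for a "person" detection) is suppressed.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
x = torch.rand(1, 3, 224, 224)       # placeholder image
y = model(x).argmax(dim=1)           # current prediction
x_adv = fgsm_perturb(model, x, y)    # perturbed image
print((x_adv - x).abs().max())       # perturbation bounded by epsilon
```

The epsilon bound is what keeps the perturbation visually negligible, which mirrors the abstract's claim of large recall reduction with very limited degradation of image quality; attacking a detector rather than a classifier additionally requires formulating a loss over its box and class predictions.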