Showing 1 - 10 of 27 for search: '"Zheng, Xufei"'
Author:
Zhang, Xiaomei, Zhang, Zhaoxi, Zhong, Qi, Zheng, Xufei, Zhang, Yanjun, Hu, Shengshan, Zhang, Leo Yu
Adversarial attacks are a serious threat to the reliable deployment of machine learning models in safety-critical applications. They can mislead current models into incorrect predictions by slightly modifying the inputs. Recently, substantial work has …
External link:
http://arxiv.org/abs/2304.08767
Deep learning is being used in a growing number of applications. Owing to its outstanding performance, it is deployed in a variety of security- and privacy-sensitive areas in addition to conventional applications. One of the key aspects of deep learning …
External link:
http://arxiv.org/abs/2205.06986
Published in:
In Biomedical Signal Processing and Control October 2024 96 Part A
Published in:
In Displays September 2024 84
Deep learning models are known to be vulnerable to adversarial examples that are elaborately designed for malicious purposes and are imperceptible to the human perceptual system. The autoencoder, when trained solely on benign examples, has been widely …
External link:
http://arxiv.org/abs/2105.03689
Author:
Peng, PeiQiang *, Zheng, XuFei *, Wang, YueTing, Jiang, ShuNing, Chen, JiaJu, Sui, Xin, Zhao, LiJing, Xu, Haiyan, Lu, Yuming, Zhang, Shuang
Published in:
In Archives of Physical Medicine and Rehabilitation April 2024
Published in:
2022 IEEE International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom).
Deep learning models are known to be vulnerable to adversarial examples that are elaborately designed for malicious purposes and are imperceptible to the human perceptual system. The autoencoder, when trained solely on benign examples, has been widely …
Published in:
In Neurocomputing 3 March 2015 151 Part 3:1477-1485
Published in:
In Biomedical Signal Processing and Control May 2014 11:10-16
Published in:
Journal of Supercomputing, December 2012, Vol. 62, Issue 3, pp. 1451-1479.