Showing 1 - 10 of 28 for the search: '"Aishan Liu"'
Published in:
IEEE Transactions on Multimedia, pp. 1-12.
Published in:
Neurocomputing, 496:227-237.
Published in:
IEEE Transactions on Image Processing, 31:598-611.
Adversarial examples are inputs with imperceptible perturbations that easily mislead deep neural networks (DNNs). Recently, the adversarial patch, with noise confined to a small and localized patch, has emerged for its easy feasibility in real-world scenarios …
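The snippet above describes adversarial examples: imperceptible perturbations that flip a DNN's prediction. A minimal toy sketch of the fast gradient sign method (FGSM) on a hypothetical linear classifier, in pure NumPy; this illustrates the general idea only and is not the method of the cited paper:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm(x, y, W, eps):
    """Return an adversarial version of input x (true label y)
    for the toy linear model f(x) = softmax(W @ x)."""
    p = softmax(W @ x)           # predicted class probabilities
    grad_logits = p.copy()
    grad_logits[y] -= 1.0        # dL/dlogits for cross-entropy loss
    grad_x = W.T @ grad_logits   # dL/dx via the chain rule
    # perturb each pixel by eps in the direction of the loss gradient
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))      # hypothetical 3-class, 8-feature model
x = rng.uniform(size=8)
x_adv = fgsm(x, y=0, W=W, eps=0.05)
```

The perturbation is bounded by `eps` in the max-norm, which is what makes it "imperceptible" for small `eps`; a physical adversarial patch instead concentrates unbounded noise in a small spatial region.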
Published in:
Proceedings of the 30th ACM International Conference on Multimedia.
Published in:
Proceedings of the 30th ACM International Conference on Multimedia.
Published in:
Proceedings of the 30th ACM International Conference on Multimedia.
Published in:
IEEE Transactions on Cybernetics.
Recently, deep neural networks have achieved promising performance for in-filling large missing regions in image inpainting tasks. They have usually adopted the standard convolutional architecture over the corrupted image, leading to meaningless content …
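The snippet above criticizes applying standard convolutions to corrupted images, since missing pixels contribute garbage values. One common alternative is a mask-aware ("partial") convolution that convolves only over valid pixels; a 1-D toy sketch in NumPy, not the cited paper's architecture:

```python
import numpy as np

def partial_conv1d(x, mask, w):
    """Convolve x with kernel w, ignoring positions where mask == 0.
    Returns the output and an updated validity mask."""
    k = len(w)
    out = np.zeros(len(x) - k + 1)
    new_mask = np.zeros_like(out)
    for i in range(len(out)):
        m = mask[i:i + k]
        if m.sum() > 0:
            # zero out missing pixels, then re-weight by the
            # fraction of valid pixels in the window
            out[i] = (x[i:i + k] * m) @ w * (k / m.sum())
            new_mask[i] = 1.0   # window produced a valid output
    return out, new_mask

x = np.array([1.0, 2.0, 0.0, 4.0, 5.0])    # third pixel is "missing"
mask = np.array([1.0, 1.0, 0.0, 1.0, 1.0])
w = np.ones(3) / 3.0                        # simple averaging kernel
out, new_mask = partial_conv1d(x, mask, w)
```

Unlike a standard convolution, the missing pixel's placeholder value never reaches the output; each window averages only the pixels that actually exist.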
Author:
Renshuai Tao, Hainan Li, Tianbo Wang, Yanlu Wei, Yifu Ding, Bowei Jin, Hongping Zhi, Xianglong Liu, Aishan Liu
Published in:
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Published in:
Information Sciences, 547:568-578.
Deep neural networks (DNNs) are vulnerable to adversarial examples, which are inputs with imperceptible perturbations. Understanding the adversarial robustness of DNNs has become an important issue, which would for certain result in better pr…
Author:
Dawn Song, Alan L. Yuille, Dacheng Tao, Xinyun Chen, Animashree Anandkumar, Xianglong Liu, Xun Yang, Aishan Liu, Chaowei Xiao, Yingwei Li
Published in:
ACM Multimedia
Deep learning has achieved significant success in multimedia fields involving computer vision, natural language processing, and acoustics. However, research in adversarial learning also shows that such models are highly vulnerable to adversarial examples. Ex…