Showing 1 - 10 of 2,510 for search: '"ZHANG Jiayu"'
Author:
Jin, Zhibo, Zhang, Jiayu, Zhu, Zhiyu, Zhang, Chenyu, Huang, Jiahao, Zhou, Jianlong, Chen, Fang
Transferable adversarial attacks pose significant threats to deep neural networks, particularly in black-box scenarios where internal model information is inaccessible. Studying adversarial attack methods helps advance the performance of defense mechanisms…
External link:
http://arxiv.org/abs/2408.12673
Adversarial examples are a key method for exploiting deep neural networks. Using gradient information, such examples can be generated efficiently without altering the victim model. Recent frequency-domain transformations have further enhanced the transferability…
External link:
http://arxiv.org/abs/2408.12670
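The snippet above notes that adversarial examples can be generated from gradient information without altering the victim model. As a point of reference, here is a minimal FGSM-style sketch in PyTorch; the model, label, epsilon, and one-step sign update are illustrative assumptions, not the paper's method (whose frequency-domain transformation is not shown here).

```python
# Minimal sketch of gradient-based adversarial example generation (FGSM-style).
# Assumes a PyTorch classifier `model` and a correctly labeled input batch;
# epsilon is an illustrative perturbation budget, not a value from the paper.
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=8 / 255):
    """Generate an adversarial example from input x with true label y."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```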
In the field of artificial intelligence, AI models are frequently described as 'black boxes' due to the obscurity of their internal mechanisms. This has ignited research interest in model interpretability, especially in attribution methods that offer…
External link:
http://arxiv.org/abs/2408.07736
Author:
Jin, Zhibo, Zhang, Jiayu, Zhu, Zhiyu, Zhang, Chenyu, Huang, Jiahao, Zhou, Jianlong, Chen, Fang
In recent times, the swift evolution of adversarial attacks has captured widespread attention, particularly concerning their transferability and other performance attributes. These techniques are primarily executed at the sample level, frequently overlooking…
External link:
http://arxiv.org/abs/2408.07733
Despite the exceptional performance of deep neural networks (DNNs) across different domains, they are vulnerable to adversarial samples, particularly for tasks related to computer vision. Such vulnerability is further influenced by the digital conta…
External link:
http://arxiv.org/abs/2406.07580
The robustness of deep learning models against adversarial attacks remains a pivotal concern. This study presents, for the first time, an exhaustive review of the transferability aspect of adversarial attacks. It systematically categorizes and critic…
External link:
http://arxiv.org/abs/2402.00418
Author:
Zhu, Zhiyu, Chen, Huaming, Wang, Xinyi, Zhang, Jiayu, Jin, Zhibo, Choo, Kim-Kwang Raymond, Shen, Jun, Yuan, Dong
Adversarial generative models, such as Generative Adversarial Networks (GANs), are widely applied to generate various types of data, e.g., images, text, and audio. Accordingly, their promising performance has led to GAN-based adversarial attack…
External link:
http://arxiv.org/abs/2401.06031
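The entry above concerns GAN-based adversarial attacks, in which a generative model produces the perturbation rather than a gradient step on the victim. A hedged sketch of that general idea, assuming a hypothetical pretrained perturbation generator G (a torch.nn.Module); this is not the paper's specific architecture.

```python
# Minimal sketch of a generator-based adversarial attack. G, epsilon, and the
# tanh bounding trick are illustrative assumptions, not the paper's method.
import torch

def gan_attack(G, x, epsilon=8 / 255):
    """Craft an adversarial example by adding a generator-produced perturbation."""
    with torch.no_grad():
        delta = epsilon * torch.tanh(G(x))  # bound the perturbation to [-eps, eps]
        return (x + delta).clamp(0, 1)
```

Once G is trained, crafting an example is a single forward pass, which is the usual efficiency argument for generator-based attacks over iterative gradient methods.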
Author:
Zhu, Zhiyu, Chen, Huaming, Zhang, Jiayu, Wang, Xinyi, Jin, Zhibo, Xue, Minhui, Zhu, Dongxiao, Choo, Kim-Kwang Raymond
To better understand the output of deep neural networks (DNNs), attribution-based methods have been an important approach to model interpretability; they assign a score to each input dimension to indicate its importance towards the model outcome…
External link:
http://arxiv.org/abs/2312.13630
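The entry above describes attribution methods that assign an importance score to each input dimension. A minimal gradient-times-input sketch illustrates that idea; it is a common baseline attribution, assumed here for illustration rather than the method proposed in the paper.

```python
# Minimal sketch of gradient-times-input attribution for a PyTorch classifier.
# Works for a single input (batch size 1); larger absolute values indicate
# dimensions more important to the chosen class logit.
import torch

def gradient_x_input(model, x, target_class):
    """Score each input dimension by gradient * input for target_class."""
    x = x.clone().detach().requires_grad_(True)
    score = model(x)[0, target_class]  # logit of the class of interest
    score.backward()
    return (x.grad * x).detach()
```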
While deep neural networks achieve excellent results in many fields, they are susceptible to interference from adversarial samples, resulting in erroneous judgments. Feature-level attacks are one effective attack type; they target the learnt features…
External link:
http://arxiv.org/abs/2310.10427
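The entry above describes feature-level attacks, which target the learnt features in hidden layers rather than the output loss. A minimal PGD-style sketch under that interpretation, assuming a PyTorch model and a chosen hidden layer; the MSE objective and all hyperparameters are illustrative assumptions, not the paper's attack.

```python
# Minimal sketch of a feature-level attack: perturb x so the chosen layer's
# features drift away from their clean values, within an L-inf budget.
import torch
import torch.nn.functional as F

def feature_level_attack(model, layer, x, epsilon=8 / 255, alpha=2 / 255, steps=10):
    feats = {}
    handle = layer.register_forward_hook(lambda m, i, o: feats.update(out=o))
    model(x)
    clean = feats["out"].detach()          # clean feature map of the target layer
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        model(x_adv)
        loss = F.mse_loss(feats["out"], clean)
        loss.backward()
        with torch.no_grad():
            # Ascend on the feature distance, then project into the eps-ball.
            x_adv = x_adv + alpha * x_adv.grad.sign()
            x_adv = torch.clamp(x_adv, x - epsilon, x + epsilon).clamp(0, 1)
        x_adv = x_adv.detach()
    handle.remove()
    return x_adv
```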
Author:
Zhang, Jiayu
Remote state preparation with verifiability (RSPV) is an important quantum cryptographic primitive [GV19, Zha22]. In this primitive, a client would like to prepare a quantum state (sampled or chosen from a state family) on the server side, such that…
External link:
http://arxiv.org/abs/2310.05246