Showing 1 - 10
of 2,559
for search: '"Wang, Yajie"'
Author:
Zou, Bo, Wang, Shaofeng, Liu, Hao, Sun, Gaoyue, Wang, Yajie, Zuo, FeiFei, Quan, Chengbin, Zhao, Youjian
Teeth localization, segmentation, and labeling in 2D images have great potential in modern dentistry to enhance dental diagnostics, treatment planning, and population-based studies on oral health. However, general instance segmentation frameworks are…
External link:
http://arxiv.org/abs/2404.01013
Adversarial transferability enables black-box attacks on unknown victim deep neural networks (DNNs), rendering attacks viable in real-world scenarios. Current transferable attacks create adversarial perturbation over the entire image, resulting in…
External link:
http://arxiv.org/abs/2312.06199
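For context on the "perturbation over the entire image" mentioned in the abstract above, here is a minimal sketch of the classic one-step FGSM attack, which perturbs every input dimension at once. The toy logistic "model", its weights, and the epsilon are illustrative assumptions; this is not the paper's own method.

```python
import math

# Toy stand-in for a victim model: logistic regression on a flattened
# 2x2 "image". Real transferable attacks target deep neural networks;
# the weights here are made up for illustration.
W = [0.5, -1.2, 0.8, 0.3]

def predict(x):
    """Probability of the positive class under the toy model."""
    z = sum(wi * xi for wi, xi in zip(W, x))
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, eps=0.1):
    """One-step FGSM: nudge EVERY pixel by eps in the sign of the
    loss gradient. For logistic loss, dL/dx_i = (p - y) * w_i."""
    p = predict(x)
    grad = [(p - y) * wi for wi in W]
    return [xi + eps * (1.0 if g > 0 else -1.0) for xi, g in zip(x, grad)]

x = [0.2, 0.4, 0.1, 0.9]   # benign input, flattened
x_adv = fgsm(x, y=1)       # perturbation covers the whole input
print(predict(x), predict(x_adv))
```

Because the perturbation touches every pixel, the adversarial confidence for the true label drops relative to the benign input, which is the behavior localized-perturbation work seeks to restrict.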
Deep learning techniques have enabled many unconditional image generation (UIG) models, such as GANs and diffusion models. The extremely realistic images (also known as AI-Generated Content, or AIGC) produced by these models bring urgent…
External link:
http://arxiv.org/abs/2310.09479
Deep neural networks (DNNs) have gained popularity in various scenarios in recent years. However, their excellent ability to fit complex functions also makes them vulnerable to backdoor attacks. Specifically, a backdoor can remain hidden indefinitely…
External link:
http://arxiv.org/abs/2305.09677
Deep neural networks (DNNs) have made tremendous progress in the past ten years and have been applied in various critical applications. However, recent studies have shown that deep neural networks are vulnerable to backdoor attacks. By injecting…
External link:
http://arxiv.org/abs/2305.10596
Author:
Lin, Yujia, Chen, Liming, Ali, Aftab, Nugent, Christopher, Cleland, Ian, Li, Rongyang, Gao, Dazhi, Wang, Hang, Wang, Yajie, Ning, Huansheng
Digital twins have recently attracted growing attention, leading to intensive research and applications. Along with this, a new research area, dubbed "human digital twin" (HDT), has emerged. Similar to the conception of a digital twin, HDT is referred…
External link:
http://arxiv.org/abs/2212.05937
Author:
Dong, Yinpeng, Chen, Peng, Deng, Senyou, L, Lianji, Sun, Yi, Zhao, Hanyu, Li, Jiaxing, Tan, Yunteng, Liu, Xinyu, Dong, Yangyi, Xu, Enhui, Xu, Jincai, Xu, Shu, Fu, Xuelin, Sun, Changfeng, Han, Haoliang, Zhang, Xuchong, Chen, Shen, Sun, Zhimin, Cao, Junyi, Yao, Taiping, Ding, Shouhong, Wu, Yu, Lin, Jian, Wu, Tianpeng, Wang, Ye, Fu, Yu, Feng, Lin, Gao, Kangkang, Liu, Zeyu, Pang, Yuanzhe, Duan, Chengqi, Zhou, Huipeng, Wang, Yajie, Zhao, Yuhang, Wu, Shangbo, Lyu, Haoran, Lin, Zhiyu, Gao, Yifei, Li, Shuang, Wang, Haonan, Sang, Jitao, Ma, Chen, Zheng, Junhao, Li, Yijia, Shen, Chao, Lin, Chenhao, Cui, Zhichao, Liu, Guoshuai, Shi, Huafeng, Hu, Kun, Zhang, Mengxin
The security of artificial intelligence (AI) is an important research area towards safe, reliable, and trustworthy AI systems. To accelerate the research on AI security, the Artificial Intelligence Security Competition (AISC) was organized by…
External link:
http://arxiv.org/abs/2212.03412
Backdoor attacks threaten deep neural networks (DNNs). For stealthiness, researchers have proposed clean-label backdoor attacks, which require that adversaries not alter the labels of the poisoned training datasets. Clean-label settings make the…
External link:
http://arxiv.org/abs/2206.04881
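To illustrate the clean-label constraint named in the abstract above, here is a minimal sketch of backdoor data poisoning on toy data. The trigger (a single "pixel" set to maximum intensity), the target label, and the poisoning rate are illustrative assumptions, not the paper's actual attack.

```python
# Hypothetical backdoor-poisoning routine on toy flattened "images".
TRIGGER_VALUE = 1.0   # trigger: overwrite the last pixel with max intensity
TARGET_LABEL = 7      # label a dirty-label attacker would force

def stamp_trigger(image):
    """Return a copy of the image with the trigger pixel applied."""
    patched = list(image)
    patched[-1] = TRIGGER_VALUE
    return patched

def poison(dataset, rate=0.1, clean_label=True):
    """Stamp the trigger on the first `rate` fraction of samples.
    Dirty-label attacks also flip the label to TARGET_LABEL; clean-label
    attacks (as in this line of work) must leave the label untouched."""
    n_poison = int(len(dataset) * rate)
    out = []
    for i, (img, label) in enumerate(dataset):
        if i < n_poison:
            out.append((stamp_trigger(img), label if clean_label else TARGET_LABEL))
        else:
            out.append((img, label))
    return out

data = [([0.1, 0.2, 0.3, 0.4], 3) for _ in range(10)]
poisoned = poison(data, rate=0.2, clean_label=True)    # labels stay honest
dirty = poison(data, rate=0.2, clean_label=False)      # labels flipped
```

The clean-label variant is harder to detect by label inspection but also harder to make effective, which is the tension the abstract points at.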
Machine Learning (ML) has made unprecedented progress in the past several decades. However, due to the memorability of the training data, ML is susceptible to various attacks, especially Membership Inference Attacks (MIAs), the objective of which is…
External link:
http://arxiv.org/abs/2205.06469
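As background for the membership inference attacks (MIAs) named in the abstract above, a minimal confidence-threshold sketch: models tend to be more confident on data they memorized during training, so a simple attacker predicts "member" above a threshold. The threshold and the confidence values are assumptions chosen for this toy data, not the paper's method.

```python
# Minimal confidence-threshold membership inference sketch.
def infer_membership(confidence, threshold=0.9):
    """Predict 'member of the training set' when the target model is
    unusually confident on the sample (a memorization symptom)."""
    return confidence >= threshold

# Illustrative model confidences; real attacks query a trained model.
train_confs = [0.99, 0.97, 0.95]   # outputs on training points
test_confs = [0.60, 0.72, 0.55]    # outputs on unseen points
print([infer_membership(c) for c in train_confs + test_confs])
```

Stronger MIAs replace the fixed threshold with shadow models or per-sample calibration, but the privacy leak they exploit is the same train/test confidence gap shown here.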