Showing 1 - 10 of 279
for search: '"Pang, Lu"'
Recently, backdoor attacks have become an increasing security threat to deep neural networks and have drawn the attention of researchers. Backdoor attacks exploit vulnerabilities in third-party pretrained models during the training phase, enabling them to …
External link:
http://arxiv.org/abs/2410.12955
Author:
Lyu, Weimin, Yao, Jiachen, Gupta, Saumya, Pang, Lu, Sun, Tao, Yi, Lingjie, Hu, Lijie, Ling, Haibin, Chen, Chao
The emergence of Vision-Language Models (VLMs) represents a significant advancement in integrating computer vision with Large Language Models (LLMs) to generate detailed text descriptions from visual inputs. Despite their growing importance, the security …
External link:
http://arxiv.org/abs/2410.01264
The emergence of Vision Language Models (VLMs) is a significant advancement in integrating computer vision with Large Language Models (LLMs) to produce detailed text descriptions based on visual inputs, yet it introduces new security vulnerabilities.
External link:
http://arxiv.org/abs/2409.19232
Textual backdoor attacks pose significant security threats. Current detection approaches, typically relying on intermediate feature representations or reconstructing potential triggers, are task-specific and less effective beyond sentence classification …
External link:
http://arxiv.org/abs/2403.17155
Recent studies have revealed that backdoor attacks can threaten the safety of natural language processing (NLP) models. Investigating the strategies of backdoor attacks helps to understand a model's vulnerability. Most existing textual …
External link:
http://arxiv.org/abs/2310.14480
Dissertation / Thesis
Author:
Pang, Lu
Modern storage systems need to deal with an increasing volume of data; however, only a very small fraction of the data is needed by the currently running applications. The growing availability of low-cost, high-capacity storage devices and the …
External link:
http://hdl.handle.net/20.500.12613/8568
Deep neural networks are vulnerable to backdoor attacks, where an adversary maliciously manipulates the model's behavior by overlaying images with special triggers. Existing backdoor defense methods often require access to a few validation data and …
External link:
http://arxiv.org/abs/2303.15564
Due to the increasing computational demand of Deep Neural Networks (DNNs), companies and organizations have begun to outsource the training process. However, externally trained DNNs can potentially be backdoor attacked. It is crucial to defend against …
External link:
http://arxiv.org/abs/2211.12044
Author:
Chen, Qi, Zhou, Zeyan, Cai, Sulin, Lv, Meiqi, Yang, Yinghui, Luo, Yunchao, Jiang, Han, Liu, Run, Cao, Tingting, Yao, Bei, Chen, Yunru, Li, Qiang, Zeng, Xiaoyi, Ye, Rumeng, Fang, You, Pan, Yueting, He, Weihua, Pang, Lu, He, Hualong, Wan, Pengwei, Ji, Yanli, Li, Changzhong, Jin, Cheng, Baidourela, Aliya, Zeng, Jiaqin, Pu, Gaozhong, Chen, Siyuan, Liang, Jiawen, Tian, Xingjun
Published in:
Soil & Tillage Research, January 2024, Vol. 235
Published in:
Machine Learning with Applications, Vol. 14, p. 100494 (2023)
Resilience in the context of the ongoing COVID-19 pandemic has emerged as a critical public health concern for the elderly population. However, the extent to which a structured model can effectively determine resilience among older adults remains unclear …
External link:
https://doaj.org/article/f44f397670ad42ca8cce2540d86c3710