Showing 1 - 10 of 43 for search: '"Xu, Qiongkai"'
Embeddings-as-a-Service (EaaS) is a service offered by large language model (LLM) developers to supply embeddings generated by LLMs. Previous research suggests that EaaS is prone to imitation attacks -- attacks that clone the underlying EaaS model by…
External link:
http://arxiv.org/abs/2409.04459
Natural language processing (NLP) models may leak private information in different ways, including membership inference, reconstruction or attribute inference attacks. Sensitive information may not be explicit in the text, but hidden in underlying…
External link:
http://arxiv.org/abs/2406.19642
Author:
Huang, Shuo, MacLean, William, Kang, Xiaoxi, Wu, Anqi, Qu, Lizhen, Xu, Qiongkai, Li, Zhuang, Yuan, Xingliang, Haffari, Gholamreza
Increasing concerns about privacy leakage issues in academia and industry arise when employing NLP models from third-party providers to process sensitive texts. To protect privacy before sending sensitive data to those models, we suggest sanitizing…
External link:
http://arxiv.org/abs/2406.03749
Recent studies have shown that distributed machine learning is vulnerable to gradient inversion attacks, where private training data can be reconstructed by analyzing the gradients of the models shared in training. Previous attacks established that…
External link:
http://arxiv.org/abs/2406.00999
Modern NLP models are often trained on public datasets drawn from diverse sources, rendering them vulnerable to data poisoning attacks. These attacks can manipulate the model's behavior in ways engineered by the attacker. One such tactic involves the…
External link:
http://arxiv.org/abs/2405.11575
Author:
He, Xuanli, Wang, Jun, Xu, Qiongkai, Minervini, Pasquale, Stenetorp, Pontus, Rubinstein, Benjamin I. P., Cohn, Trevor
The implications of backdoor attacks on English-centric large language models (LLMs) have been widely examined - such attacks can be achieved by embedding malicious behaviors during training and activated under specific conditions that trigger…
External link:
http://arxiv.org/abs/2404.19597
Large language model (LLM) services have recently begun offering a plugin ecosystem to interact with third-party API services. This innovation enhances the capabilities of LLMs, but it also introduces risks, as these plugins developed by various…
External link:
http://arxiv.org/abs/2404.16891
While multilingual machine translation (MNMT) systems hold substantial promise, they also have security vulnerabilities. Our research highlights that MNMT systems can be susceptible to a particularly devious style of backdoor attack, whereby an…
External link:
http://arxiv.org/abs/2404.02393
Embedding as a Service (EaaS) has become a widely adopted solution, which offers feature extraction capabilities for addressing various downstream tasks in Natural Language Processing (NLP). Prior studies have shown that EaaS can be prone to model…
External link:
http://arxiv.org/abs/2403.01472
The democratization of pre-trained language models through open-source initiatives has rapidly advanced innovation and expanded access to cutting-edge technologies. However, this openness also brings significant security risks, including backdoor…
External link:
http://arxiv.org/abs/2402.19334