Showing 1 - 10 of 134
for search: '"Yan, Qiben"'
Smartphones and wearable devices have been integrated into our daily lives, offering personalized services. However, many apps become overprivileged as their collected sensing data contains unnecessary sensitive information. For example, mobile sensi…
External link:
http://arxiv.org/abs/2409.03796
Large Language Models (LLMs) have demonstrated great capabilities in natural language understanding and generation, largely attributed to the intricate alignment process using human feedback. While alignment has become an essential training component…
External link:
http://arxiv.org/abs/2409.00787
Author:
Chen, Yaojian; Yan, Qiben
In this paper, we introduce a privacy-preserving stable diffusion framework leveraging homomorphic encryption, called HE-Diffusion, which primarily focuses on protecting the denoising phase of the diffusion process. HE-Diffusion is a tailored encrypt…
External link:
http://arxiv.org/abs/2403.05794
The emergence of Artificial Intelligence (AI)-driven audio attacks has revealed new security vulnerabilities in voice control systems. While researchers have introduced a multitude of attack strategies targeting voice control systems (VCS), the conti…
External link:
http://arxiv.org/abs/2312.06010
Artificial Intelligence (AI) systems such as autonomous vehicles, facial recognition, and speech recognition systems are increasingly integrated into our daily lives. However, despite their utility, these AI systems are vulnerable to a wide range of…
External link:
http://arxiv.org/abs/2311.11796
Large language models (LLMs), known for their capability in understanding and following instructions, are vulnerable to adversarial attacks. Researchers have found that current commercial LLMs either fail to be "harmless" by presenting unethical answ…
External link:
http://arxiv.org/abs/2310.02417
Speaker Verification (SV) is widely deployed in mobile systems to authenticate legitimate users by using their voice traits. In this work, we propose a backdoor attack MASTERKEY, to compromise the SV models. Different from previous attacks, we focus…
External link:
http://arxiv.org/abs/2309.06981
PhantomSound: Black-Box, Query-Efficient Audio Adversarial Attack via Split-Second Phoneme Injection
In this paper, we propose PhantomSound, a query-efficient black-box attack toward voice assistants. Existing black-box adversarial attacks on voice assistants either apply substitution models or leverage the intermediate model output to estimate the…
External link:
http://arxiv.org/abs/2309.06960
Published in:
2023 SECON
Federated Learning (FL) is a distributed machine learning (ML) paradigm, aiming to train a global model by exploiting the decentralized data across millions of edge devices. Compared with centralized learning, FL preserves the clients' privacy by ref…
External link:
http://arxiv.org/abs/2308.06267
Recent advances in natural language processing and machine learning have led to the development of chatbot models, such as ChatGPT, that can engage in conversational dialogue with human users. However, the ability of these models to generate toxic or…
External link:
http://arxiv.org/abs/2307.09579