Showing 1 - 10 of 645 for search: '"Li, Changjiang"'
Author:
Li, Changjiang, Pang, Ren, Cao, Bochuan, Chen, Jinghui, Ma, Fenglong, Ji, Shouling, Wang, Ting
Thanks to their remarkable denoising capabilities, diffusion models are increasingly being employed as defensive tools to reinforce the security of other models, notably in purifying adversarial examples and certifying adversarial robustness. However…
External link:
http://arxiv.org/abs/2406.09669
Author:
Chen, Jiahao, Shen, Zhiqiang, Pu, Yuwen, Zhou, Chunyi, Li, Changjiang, Li, Jiliang, Wang, Ting, Ji, Shouling
Face Recognition Systems (FRS) have been increasingly integrated into critical applications, including surveillance and user authentication, highlighting their pivotal role in modern security systems. Recent studies have revealed vulnerabilities in FRS to…
External link:
http://arxiv.org/abs/2405.12786
Author:
Chen, Yuxiao, Wu, Jingzheng, Ling, Xiang, Li, Changjiang, Rui, Zhiqing, Luo, Tianyue, Wu, Yanjun
In recent years, large language models (LLMs) have demonstrated substantial potential in addressing automatic program repair (APR) tasks. However, the current evaluation of these models for APR tasks focuses solely on the limited context of the singl…
External link:
http://arxiv.org/abs/2403.00448
Author:
Yang, Yong, Li, Changjiang, Jiang, Yi, Chen, Xi, Wang, Haoyu, Zhang, Xuhong, Wang, Zonghui, Ji, Shouling
In recent years, "prompt as a service" has greatly enhanced the utility of large language models (LLMs) by enabling them to perform various downstream tasks efficiently without fine-tuning. This has also increased the commercial value of prompts. How…
External link:
http://arxiv.org/abs/2402.19200
Author:
Li, Changjiang, Pang, Ren, Cao, Bochuan, Xi, Zhaohan, Chen, Jinghui, Ji, Shouling, Wang, Ting
Recent studies have shown that contrastive learning, like supervised learning, is highly vulnerable to backdoor attacks wherein malicious functions are injected into target models, only to be activated by specific triggers. However, thus far it remai…
External link:
http://arxiv.org/abs/2312.09057
Model extraction (ME) attacks represent one major threat to Machine-Learning-as-a-Service (MLaaS) platforms by "stealing" the functionality of confidential machine-learning models through querying black-box APIs. Over seven years have passed since…
External link:
http://arxiv.org/abs/2312.05386
Diffusion-based image generation models, such as Stable Diffusion or DALL-E 2, are able to learn from given images and generate high-quality samples following the guidance from prompts. For instance, they can be used to create artistic images that mi…
External link:
http://arxiv.org/abs/2310.19248
Author:
Xi, Zhaohan, Du, Tianyu, Li, Changjiang, Pang, Ren, Ji, Shouling, Chen, Jinghui, Ma, Fenglong, Wang, Ting
Pre-trained language models (PLMs) have demonstrated remarkable performance as few-shot learners. However, their security risks under such settings are largely unexplored. In this work, we conduct a pilot study showing that PLMs as few-shot learners…
External link:
http://arxiv.org/abs/2309.13256
Author:
Xi, Zhaohan, Du, Tianyu, Li, Changjiang, Pang, Ren, Ji, Shouling, Luo, Xiapu, Xiao, Xusheng, Ma, Fenglong, Wang, Ting
Knowledge graph reasoning (KGR) -- answering complex logical queries over large knowledge graphs -- represents an important artificial intelligence task, entailing a range of applications (e.g., cyber threat hunting). However, despite its surging pop…
External link:
http://arxiv.org/abs/2305.02383
Vertical federated learning (VFL) is an emerging paradigm that enables collaborators to build machine learning models together in a distributed fashion. In general, these parties have a group of users in common but own different features. Existing VF…
External link:
http://arxiv.org/abs/2212.00322