Showing 1 - 10 of 4,561
for search: '"An, Yige"'
Despite their superb multimodal capabilities, Vision-Language Models (VLMs) have been shown to be vulnerable to jailbreak attacks, which are inference-time attacks that induce the model to output harmful responses with tricky prompts. It is thus esse…
External link:
http://arxiv.org/abs/2410.20971
Backdoor attacks covertly implant triggers into deep neural networks (DNNs) by poisoning a small portion of the training data with pre-designed backdoor triggers. This vulnerability is exacerbated in the era of large models, where extensive (pre-)tra…
External link:
http://arxiv.org/abs/2410.19427
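For orientation, the trigger-poisoning mechanism described in the entry above can be illustrated with a minimal BadNets-style sketch (this is generic background, not the method of the paper itself); the array shapes, patch size, poison rate, and target label are illustrative assumptions.

import numpy as np

def poison_dataset(images, labels, poison_rate=0.05, target_label=0, trigger_value=1.0):
    """BadNets-style poisoning sketch: stamp a small trigger patch onto a random
    subset of training images and relabel those samples to the attacker's target class.

    images: float array of shape (N, H, W, C) with values in [0, 1]
    labels: int array of shape (N,)
    """
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = np.random.choice(len(images), n_poison, replace=False)

    # 3x3 trigger patch in the bottom-right corner (an illustrative choice).
    images[idx, -3:, -3:, :] = trigger_value
    labels[idx] = target_label
    return images, labels, idx

A model trained on such data behaves normally on clean inputs, but stamping the same patch on any test image at inference time steers it toward target_label.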
Author:
Liang, Xinhui, Yue, Zongpei, Chao, Yu-Xin, Hua, Zhen-Xing, Lin, Yige, Tey, Meng Khoon, You, Li
Quantum information scrambling, which describes the propagation and effective loss of local information, is crucial for understanding the dynamics of quantum many-body systems. In general, a typical interacting system would thermalize under time evol…
External link:
http://arxiv.org/abs/2410.16174
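As background, scrambling is commonly diagnosed with the out-of-time-ordered correlator (OTOC); the standard definition is reproduced below for reference (it is not necessarily the specific diagnostic used in the paper above).

% Out-of-time-ordered correlator for local operators W and V,
% with W(t) = e^{iHt} W e^{-iHt} the Heisenberg-evolved operator
% and the expectation taken in a thermal state at inverse temperature \beta.
C(t) = \left\langle \left[ W(t), V \right]^{\dagger} \left[ W(t), V \right] \right\rangle_{\beta}

Growth of C(t) signals that initially commuting local operators cease to commute, i.e. that local information has spread through the system.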
This paper introduces a novel generalized self-imitation learning ($\textbf{GSIL}$) framework, which effectively and efficiently aligns large language models with offline demonstration data. We develop $\textbf{GSIL}$ by deriving a surrogate objectiv…
External link:
http://arxiv.org/abs/2410.10093
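The snippet above mentions only a surrogate objective; as a rough baseline for "aligning with offline demonstration data", a plain supervised imitation (behavior-cloning) step on a prompt-demonstration pair is sketched below. This is not the paper's GSIL objective, just the kind of baseline it generalizes; the checkpoint name and function are placeholders.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint; any causal LM works the same way here.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def imitation_step(prompt, demonstration):
    """One supervised fine-tuning step on an offline demonstration:
    maximize the log-likelihood of the demonstrated response given the prompt."""
    batch = tokenizer(prompt + demonstration, return_tensors="pt")
    # Standard causal-LM loss; labels equal to input_ids gives the next-token NLL.
    out = model(input_ids=batch["input_ids"],
                attention_mask=batch["attention_mask"],
                labels=batch["input_ids"])
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return out.loss.item()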
Test-Time Adaptation (TTA) has emerged as a promising paradigm for enhancing the generalizability of models. However, existing mainstream TTA methods, predominantly operating at batch level, often exhibit suboptimal performance in complex real-world…
External link:
http://arxiv.org/abs/2410.09398
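For context on what "batch-level" TTA means, a minimal Tent-style adaptation step (entropy minimization on the unlabeled test batch) is sketched below; this is a common baseline, not the method of the paper above, and which parameters the optimizer covers (typically only normalization-layer affine parameters) is an assumption.

import torch
import torch.nn.functional as F

def tta_entropy_step(model, optimizer, x_batch):
    """One batch-level TTA step: adapt the model online by minimizing the
    prediction entropy on the unlabeled test batch (Tent-style baseline)."""
    logits = model(x_batch)
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=-1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return entropy.item()

Because each update leans on the statistics of the current test batch, such methods can degrade when batches are small, streaming, or non-i.i.d., which is the setting the abstract points to.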
Author:
Zhang, Jiaming, Ye, Junhong, Ma, Xingjun, Li, Yige, Yang, Yunfan, Sang, Jitao, Yeung, Dit-Yan
Due to their multimodal capabilities, Vision-Language Models (VLMs) have found numerous impactful applications in real-world scenarios. However, recent studies have revealed that VLMs are vulnerable to image-based adversarial attacks, particularly ta…
External link:
http://arxiv.org/abs/2410.05346
Large language models (LLMs) have brought a great breakthrough to the natural language processing (NLP) community, while also posing the challenge of handling concurrent customer queries due to their high throughput demands. Data multiplexing addresses t…
External link:
http://arxiv.org/abs/2410.04519
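As a purely illustrative toy of the data-multiplexing idea (several queries composed into one representation, processed in a single forward pass through a shared backbone, then demultiplexed per query), one might write something like the sketch below; it is not the paper's architecture, and all dimensions and module names are made up.

import torch
import torch.nn as nn

class ToyMultiplexer(nn.Module):
    """Toy data multiplexing: mix n_slots input vectors into one vector via fixed
    random projections, run the shared backbone once, then demultiplex with
    per-slot output heads. Dimensions and slot count are illustrative assumptions."""
    def __init__(self, dim=64, n_slots=4, n_classes=10):
        super().__init__()
        # A fixed random projection per slot lets the mixed inputs be told apart.
        self.proj = nn.Parameter(torch.randn(n_slots, dim, dim) / dim ** 0.5,
                                 requires_grad=False)
        self.backbone = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                      nn.Linear(dim, dim))
        self.heads = nn.ModuleList(nn.Linear(dim, n_classes)
                                   for _ in range(n_slots))

    def forward(self, xs):            # xs: (n_slots, batch, dim)
        mixed = sum(x @ self.proj[i] for i, x in enumerate(xs))  # (batch, dim)
        h = self.backbone(mixed)      # a single forward pass serves all slots
        return [head(h) for head in self.heads]                  # per-slot logits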
Despite significant ongoing efforts in safety alignment, large language models (LLMs) such as GPT-4 and LLaMA 3 remain vulnerable to jailbreak attacks that can induce harmful behaviors, including those triggered by adversarial suffixes. Building on p…
External link:
http://arxiv.org/abs/2410.00451
Author:
Song, Seohyun, Jo, Eunkyul Leah, Chen, Yige, Hong, Jeen-Pyo, Kim, Kyuwon, Wee, Jin, Kang, Miyoung, Lim, KyungTae, Park, Jungyeul, Park, Chulwoo
The Sejong dictionary dataset offers a valuable resource, providing extensive coverage of morphology, syntax, and semantic representation. This dataset can be utilized to explore linguistic information in greater depth. The labeled linguistic structu…
External link:
http://arxiv.org/abs/2410.01100
Influence functions aim to quantify the impact of individual training data points on a model's predictions. While extensive research has been conducted on influence functions in traditional machine learning models, their application to large language…
External link:
http://arxiv.org/abs/2409.19998
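For reference, the classical influence-function estimate (in the Koh and Liang style) of how upweighting a training point z changes the loss at a test point z_test is the expression below; the paper above concerns extending such estimates to large language models.

% Influence of training point z on the loss at test point z_test,
% where \hat\theta is the empirical risk minimizer and H_{\hat\theta}
% is the Hessian of the training loss at \hat\theta.
\mathcal{I}(z, z_{\mathrm{test}})
  = - \nabla_{\theta} L(z_{\mathrm{test}}, \hat{\theta})^{\top}
      H_{\hat{\theta}}^{-1}
      \nabla_{\theta} L(z, \hat{\theta}),
\qquad
H_{\hat{\theta}} = \frac{1}{n} \sum_{i=1}^{n} \nabla^{2}_{\theta} L(z_i, \hat{\theta}).

In practice the Hessian inverse is never formed explicitly; it is approximated with Hessian-vector products or related schemes, which is where scaling to large models becomes the hard part.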