Showing 1 - 10 of 560 for search: '"Liu, Hongfu"'
Large Language Models (LLMs) face safety concerns due to potential misuse by malicious users. Recent red-teaming efforts have identified adversarial suffixes capable of jailbreaking LLMs using the gradient-based search algorithm Greedy Coordinate…
External link:
http://arxiv.org/abs/2408.14866
Author:
Xiao, Wenxiao, Liu, Hongfu
Active learning strategically selects informative unlabeled data points and queries their ground truth labels for model training. The prevailing assumption underlying this machine learning paradigm is that acquiring these ground truth labels will opt…
External link:
http://arxiv.org/abs/2405.17627
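The entry above concerns active learning's query-selection step. As a hedged illustration of one common strategy, uncertainty sampling (not necessarily the method studied in the paper): the learner queries labels for the pool points its current model is least certain about. The probabilities below are hypothetical stand-ins for a trained model's output.

```python
import numpy as np

# Uncertainty sampling: query the k unlabeled points whose predicted
# probability is closest to 0.5 (maximal uncertainty for a binary task).
def select_queries(probs, k):
    """Return indices of the k most uncertain pool points."""
    uncertainty = np.abs(np.asarray(probs) - 0.5)
    return np.argsort(uncertainty)[:k].tolist()

pool_probs = [0.92, 0.51, 0.08, 0.47, 0.75]  # hypothetical P(y=1 | x) per point
print(select_queries(pool_probs, k=2))       # -> [1, 3]
```

In a full loop, the selected points would be labeled by an oracle, added to the training set, and the model retrained before the next round.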
Shapley value-based data valuation methods, originating from cooperative game theory, quantify the usefulness of each individual sample by considering its contribution to all possible training subsets. Despite their extensive applications, these meth…
External link:
http://arxiv.org/abs/2405.17489
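The entry above concerns Shapley value-based data valuation. As a minimal sketch of the underlying formula only (not the paper's method, which targets the cost of this computation): exact Shapley values by enumerating all coalitions, with a toy additive utility standing in for "model performance when trained on this subset".

```python
from itertools import combinations
from math import factorial

weights = [3.0, 1.0, 2.0]  # hypothetical per-sample worth (toy stand-in)

def utility(subset):
    """Toy additive utility of a training subset."""
    return sum(weights[i] for i in subset)

def shapley_values(n, utility):
    """Exact Shapley values by subset enumeration -- O(2^n), tiny n only."""
    values = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for s in combinations(others, k):
                # Shapley weight of a coalition of size k among n players
                coef = factorial(k) * factorial(n - k - 1) / factorial(n)
                values[i] += coef * (utility(s + (i,)) - utility(s))
    return values

print(shapley_values(3, utility))  # additive utility -> values equal weights
```

For an additive utility every marginal contribution of sample i equals `weights[i]`, so the Shapley values recover the weights exactly; with a real model-accuracy utility the exponential enumeration is what approximation methods avoid.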
Influence functions serve as crucial tools for assessing sample influence in model interpretation, subset training set selection, noisy label detection, and more. By employing the first-order Taylor expansion, influence functions can estimate sample…
External link:
http://arxiv.org/abs/2405.17490
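The abstract above mentions estimating sample influence via a first-order Taylor expansion. A sketch under simplifying assumptions (least-squares linear regression on synthetic data, where per-sample gradients and the Hessian are available in closed form): removing training point z_i shifts the parameters by roughly (1/n) H⁻¹ g_i, so the test loss changes by about (1/n) g_testᵀ H⁻¹ g_i, which we compare against actual leave-one-out retraining.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 40, 3
w_true = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(n, d))
y = X @ w_true + 0.1 * rng.normal(size=n)
x_test, y_test = rng.normal(size=d), 0.3

w = np.linalg.solve(X.T @ X, X.T @ y)   # least-squares fit
H = X.T @ X / n                          # Hessian of the mean 0.5*residual^2 loss
g_train = (X @ w - y)[:, None] * X       # per-sample gradients, shape (n, d)
g_test = (x_test @ w - y_test) * x_test  # gradient of the test loss at w

# Influence-function prediction of the test-loss change from removing each point
predicted = g_train @ np.linalg.solve(H, g_test) / n

# Ground truth: actually retrain without each point (feasible at this scale)
def loo_change(i):
    keep = np.arange(n) != i
    w_i = np.linalg.solve(X[keep].T @ X[keep], X[keep].T @ y[keep])
    test_loss = lambda v: 0.5 * (x_test @ v - y_test) ** 2
    return test_loss(w_i) - test_loss(w)

actual = np.array([loo_change(i) for i in range(n)])
print(np.corrcoef(predicted, actual)[0, 1])  # close to 1 for this quadratic loss
```

The approximation's appeal is that `predicted` needs one Hessian solve rather than n retrainings; its error comes from the Hessian itself changing when a point is removed, which is small here.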
A core data-centric learning challenge is the identification of training samples that are detrimental to model performance. Influence functions serve as a prominent tool for this task and offer a robust framework for assessing training data influence…
External link:
http://arxiv.org/abs/2405.03869
Traditional applications of natural language processing (NLP) in healthcare have predominantly focused on patient-centered services, enhancing patient interactions and care delivery, such as through medical dialogue systems. However, the potential of…
External link:
http://arxiv.org/abs/2402.05547
Test-Time Adaptation (TTA) is a critical paradigm for tackling distribution shifts during inference, especially in visual recognition tasks. However, while acoustic models face similar challenges due to distribution shifts in test-time speech, TTA te…
External link:
http://arxiv.org/abs/2310.09505
Author:
Liu, Hongfu, Wang, Ye
Large Language Models (LLMs) possess the capability to engage in In-Context Learning (ICL) by leveraging a few demonstrations pertaining to a new downstream task as conditions. However, this particular learning paradigm suffers from high instability ste…
External link:
http://arxiv.org/abs/2310.08923
Fair graph partition of social networks is a crucial step toward ensuring fair and non-discriminatory treatment in unsupervised user analysis. Current fair partition methods typically consider node balance, a notion pursuing a proportionally balance…
External link:
http://arxiv.org/abs/2306.10123
Automatic Pronunciation Assessment (APA) is vital for computer-assisted language learning. Prior methods rely on annotated speech-text data to train Automatic Speech Recognition (ASR) models or speech-score data to train regression models. In this wo…
External link:
http://arxiv.org/abs/2305.19563