Showing 1 - 10 of 93 for search: '"Ji Yangfeng"'
Differential privacy (DP) is applied when fine-tuning pre-trained large language models (LLMs) to limit leakage of training examples. While most DP research has focused on improving a model's privacy-utility tradeoff, some find that DP can be unfair …
External link: http://arxiv.org/abs/2410.18749
Large language models (LLMs) are now being considered and even deployed for applications that support high-stakes decision-making, such as recruitment and clinical decisions. While several methods have been proposed for measuring bias, there remains …
External link: http://arxiv.org/abs/2408.01285
Large language models (LLMs) have demonstrated impressive zero-shot abilities in solving a wide range of general-purpose tasks. However, it is empirically found that LLMs fall short in recognizing and utilizing temporal information, rendering poor performance …
External link: http://arxiv.org/abs/2405.02778
Statistical fairness stipulates equivalent outcomes for every protected group, whereas causal fairness prescribes that a model makes the same prediction for an individual regardless of their protected characteristics. Counterfactual data augmentation …
External link: http://arxiv.org/abs/2404.00463
Author: Du, Wanyu; Ji, Yangfeng
The development of trustworthy conversational information-seeking systems relies on dialogue models that can generate faithful and accurate responses based on relevant knowledge texts. However, two main challenges hinder this task. Firstly, language …
External link: http://arxiv.org/abs/2311.00953
Essential for an unfettered data market is the ability to discreetly select and evaluate training data before finalizing a transaction between the data owner and model owner. To safeguard the privacy of both data and model, this process involves …
External link: http://arxiv.org/abs/2310.02373
Although Shapley values have been shown to be highly effective for identifying harmful training instances, dataset size and model complexity constraints limit the ability to apply Shapley-based data valuation to fine-tuning large pre-trained language …
External link: http://arxiv.org/abs/2306.10165
Learning transferable representations of knowledge graphs (KGs) is challenging due to the heterogeneous, multi-relational nature of graph structures. Inspired by Transformer-based pretrained language models' success on learning transferable representations …
External link: http://arxiv.org/abs/2303.15682
Published in: AAAI 2023
Recent NLP literature has seen growing interest in improving model interpretability. Along this direction, we propose a trainable neural network layer that learns a global interaction graph between words and then selects more informative words using …
External link: http://arxiv.org/abs/2302.02016
Some recent works have observed the instability of post-hoc explanations when input-side perturbations are applied to the model. This raises interest in and concern about the stability of post-hoc explanations. However, the remaining question is: is the instability …
External link: http://arxiv.org/abs/2212.05327