Showing 1 - 5 of 5 for search: '"Xue, Zhiyu"'
Towards Understanding Task-agnostic Debiasing Through the Lenses of Intrinsic Bias and Forgetfulness
Author:
Liu, Guangliang, Afshari, Milad, Zhang, Xitong, Xue, Zhiyu, Ghosh, Avrajit, Bashyal, Bidhan, Wang, Rongrong, Johnson, Kristen
While task-agnostic debiasing provides notable generalizability and reduced reliance on downstream data, its impact on language modeling ability and the risk of relearning social biases from downstream task-specific data remain the two most significant …
External link:
http://arxiv.org/abs/2406.04146
Author:
Liu, Guangliang, Mao, Haitao, Cao, Bochuan, Xue, Zhiyu, Johnson, Kristen, Tang, Jiliang, Wang, Rongrong
Large Language Models (LLMs) can improve their responses when instructed to do so, a capability known as self-correction. When these instructions lack specific details about the issues in the response, this is referred to as leveraging the intrinsic …
External link:
http://arxiv.org/abs/2406.02378
With the prevalence of the pretraining-finetuning paradigm in transfer learning, the robustness of downstream tasks has become a critical concern. In this work, we delve into adversarial robustness in transfer learning and reveal the critical role of …
External link:
http://arxiv.org/abs/2312.05716
Fine-tuning pretrained language models (PLMs) for downstream tasks is a large-scale optimization problem, in which the choice of the training algorithm critically determines how well the trained model can generalize to unseen test data, especially in …
External link:
http://arxiv.org/abs/2310.17588
While deep learning has been successfully applied to many real-world computer vision tasks, training robust classifiers usually requires a large amount of well-labeled data. However, annotation is often expensive and time-consuming. Few-shot image …
External link:
http://arxiv.org/abs/2009.03558