Showing 1 - 10 of 965 for search: '"Johnson, Kristen"'
Self-correction is one of the most striking emergent capabilities of Large Language Models (LLMs), enabling LLMs to revise an inappropriate output given natural language feedback that describes the problems with that output. Moral self-correction…
External link:
http://arxiv.org/abs/2410.23496
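The feedback-driven revision described in the abstract above can be illustrated with a minimal sketch. The `generate` function and the prompt wording are hypothetical placeholders for any LLM completion call, not the paper's implementation.

```python
def generate(prompt: str) -> str:
    """Hypothetical placeholder for an LLM completion call; swap in a real API or local model."""
    raise NotImplementedError

def self_correct(task: str, draft: str, feedback: str) -> str:
    """Ask the model to revise its own draft, given natural-language feedback
    that describes what is wrong with the draft."""
    prompt = (
        f"Task: {task}\n"
        f"Your previous answer: {draft}\n"
        f"Feedback on that answer: {feedback}\n"
        "Rewrite the answer so that it addresses the feedback."
    )
    return generate(prompt)
```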
Despite intensive attention to the self-correction capability of Large Language Models (LLMs), the underlying mechanism of this capability remains under-explored. In this paper, we aim to answer two fundamental questions for moral self-correction: (…
External link:
http://arxiv.org/abs/2410.20513
Large Language Models (LLMs) are capable of producing content that perpetuates stereotypes, discrimination, and toxicity. The recently proposed moral self-correction is a computationally efficient method for reducing harmful content in the responses…
External link:
http://arxiv.org/abs/2407.15286
Towards Understanding Task-agnostic Debiasing Through the Lenses of Intrinsic Bias and Forgetfulness
Author:
Liu, Guangliang, Afshari, Milad, Zhang, Xitong, Xue, Zhiyu, Ghosh, Avrajit, Bashyal, Bidhan, Wang, Rongrong, Johnson, Kristen
While task-agnostic debiasing provides notable generalizability and reduced reliance on downstream data, its impact on language modeling ability and the risk of relearning social biases from downstream task-specific data remain the two most significant…
External link:
http://arxiv.org/abs/2406.04146
Author:
Liu, Guangliang, Mao, Haitao, Cao, Bochuan, Xue, Zhiyu, Zhang, Xitong, Wang, Rongrong, Tang, Jiliang, Johnson, Kristen
Large Language Models (LLMs) are able to improve their responses when instructed to do so, a capability known as self-correction. When instructions provide only the task's goal without specific details about potential issues in the response, LLMs must…
External link:
http://arxiv.org/abs/2406.02378
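For the goal-only setting described in the abstract above (no concrete feedback is supplied, often called intrinsic self-correction), the prompt carries only the objective. Again a hedged sketch with a hypothetical `generate` placeholder.

```python
def generate(prompt: str) -> str:
    """Hypothetical placeholder for an LLM completion call."""
    raise NotImplementedError

def intrinsic_self_correct(task: str, draft: str, goal: str) -> str:
    """Instruct the model to find and fix issues itself; only the goal is given."""
    prompt = (
        f"Task: {task}\n"
        f"Your previous answer: {draft}\n"
        f"Review your answer and revise it so it better satisfies this goal: {goal}. "
        "No specific issues are pointed out; identify them yourself."
    )
    return generate(prompt)
```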
In-Context Learning (ICL) empowers Large Language Models (LLMs) with the capacity to learn in context, achieving downstream generalization without gradient updates but with a few in-context examples. Despite the encouraging empirical success, the underlying…
External link:
http://arxiv.org/abs/2402.02212
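In-context learning, as summarized above, conditions the model on a few demonstrations instead of updating its weights. Below is a minimal sketch of assembling such a prompt; the sentiment examples are invented for illustration.

```python
def build_icl_prompt(demonstrations: list[tuple[str, str]], query: str) -> str:
    """Prepend a few input/output demonstrations to the query; no gradient updates occur."""
    shots = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in demonstrations)
    return f"{shots}\nInput: {query}\nOutput:"

prompt = build_icl_prompt(
    [("The movie was wonderful.", "positive"),
     ("I wasted two hours.", "negative")],
    "A charming, heartfelt story.",
)
# `prompt` would then be sent to any LLM completion endpoint.
```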
Fine-tuning pretrained language models (PLMs) for downstream tasks is a large-scale optimization problem, in which the choice of the training algorithm critically determines how well the trained model can generalize to unseen test data, especially in…
External link:
http://arxiv.org/abs/2310.17588
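The abstract above frames fine-tuning as an optimization problem in which the training algorithm shapes generalization. The sketch below only illustrates swapping optimizers in a generic PyTorch loop; it is not the paper's specific algorithm or analysis.

```python
import torch

def fine_tune(model, data_loader, use_sgd=False, lr=2e-5, max_steps=1000):
    """Generic fine-tuning loop; the optimizer choice is the knob of interest here."""
    optimizer = (torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
                 if use_sgd else
                 torch.optim.AdamW(model.parameters(), lr=lr))
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for step, (inputs, labels) in enumerate(data_loader):
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), labels)
        loss.backward()
        optimizer.step()
        if step + 1 >= max_steps:
            break
    return model
```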
Author:
Johnson, Kristen Holmstrom
Infinite-dimensional systems such as flexible airplane wings and Vertical Axis Wind Turbine (VAWT) blades may require control to improve performance. Traditional control techniques use position and velocity information feedback, but velocity information…
External link:
http://hdl.handle.net/1969.1/ETD-TAMU-2010-08-8491
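The thesis abstract above mentions position and velocity feedback and the difficulty of obtaining velocity information. As a generic illustration only (not the thesis's method), here is a PD-type feedback law in which velocity is approximated by finite-differencing sampled position.

```python
def pd_control(position_samples, dt, kp=10.0, kd=2.0, reference=0.0):
    """Compute u[k] = -kp*(x[k] - ref) - kd*v_est[k], with v_est from finite differences."""
    controls = []
    prev_x = position_samples[0]
    for x in position_samples:
        v_est = (x - prev_x) / dt  # velocity estimate when no direct measurement is available
        controls.append(-kp * (x - reference) - kd * v_est)
        prev_x = x
    return controls
```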