Showing 1 - 10 of 16,238 results for search: '"A. Stengel"'
Author:
T. Friedrich, A. Stengel
Published in:
Frontiers in Pharmacology, Vol 14 (2023)
Phoenixin is a pleiotropic peptide whose known functions have broadened significantly over the last decade. First described as a reproductive peptide in 2013, phoenixin is now recognized as being implicated in hypertension, neuroinflammation…
External link:
https://doaj.org/article/8225072319fd48fb88b8bff040497aec
Large language models (LLMs) are susceptible to persuasion, which can pose risks when models are faced with an adversarial interlocutor. We take a first step towards defending models against persuasion while also arguing that defense against adversarial…
External link:
http://arxiv.org/abs/2410.14596
Author:
Hallman, K., Stengel, S., Jaffray, W., Belli, F., Ferrera, M., Vincenti, M. A., de Ceglia, D., Kivshar, Y., Akozbek, N., Mukhopadhyay, S., Trull, J., Cojocaru, C., Scalora, M.
Recent years have witnessed significant developments in the study of nonlinear properties of various optical materials at the nanoscale. However, in most cases experimental results on harmonic generation from nanostructured materials are reported with…
External link:
http://arxiv.org/abs/2410.12088
A Condorcet winning set is a set of candidates such that no other candidate is preferred by at least half the voters over all members of the set. The Condorcet dimension, which is the minimum cardinality of a Condorcet winning set, is known to be at…
External link:
http://arxiv.org/abs/2410.09201
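The definition above can be checked directly. A minimal sketch, assuming each voter's preference is a complete ranking (a list from most to least preferred); the function names are illustrative, not from the paper:

```python
def prefers_over_all(ranking, candidate, winning_set):
    # A voter prefers the candidate over all members of the set if the
    # candidate appears earlier in that voter's ranking than every member.
    pos = {cand: i for i, cand in enumerate(ranking)}
    return all(pos[candidate] < pos[s] for s in winning_set)

def is_condorcet_winning_set(winning_set, rankings):
    # No outside candidate may be preferred over all members of the set
    # by at least half the voters.
    n = len(rankings)
    outsiders = set(rankings[0]) - set(winning_set)
    return all(
        sum(prefers_over_all(r, c, winning_set) for r in rankings) < n / 2
        for c in outsiders
    )
```

The Condorcet dimension of a preference profile is then the size of the smallest set passing this check.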
The process of creating training data to teach models is currently driven by humans, who manually analyze model weaknesses and plan how to create data that improves a student model. Recent approaches using LLMs as annotators reduce human effort, but…
External link:
http://arxiv.org/abs/2410.06215
Reward models (RMs) play a crucial role in aligning LLMs with human preferences, enhancing their performance by ranking outputs during inference or iterative training. However, the degree to which an RM generalizes to new tasks is often not known a priori…
External link:
http://arxiv.org/abs/2410.01735
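Ranking outputs at inference time is commonly done with best-of-N sampling: score each candidate response with the reward model and keep the top one. A minimal sketch with a stand-in scoring function (a real RM maps a prompt/response pair to a scalar; the names here are illustrative):

```python
def best_of_n(prompt, candidates, reward_model):
    # Score every candidate response with the reward model and return
    # the highest-scoring one (best-of-N sampling).
    return max(candidates, key=lambda response: reward_model(prompt, response))

# Stand-in reward model for illustration only: prefers longer responses.
toy_rm = lambda prompt, response: len(response)
```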
Author:
Chen, Justin Chih-Yao, Prasad, Archiki, Saha, Swarnadeep, Stengel-Eskin, Elias, Bansal, Mohit
Large language model (LLM) reasoning can be improved using test-time aggregation strategies, i.e., generating multiple samples and voting among them. While these improve performance, they often reach a saturation point. Refinement offers…
External link:
http://arxiv.org/abs/2409.12147
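The aggregation described, sampling several answers and voting among them, is a self-consistency-style majority vote; a minimal sketch:

```python
from collections import Counter

def majority_vote(answers):
    # Return the most frequent final answer among the sampled generations.
    winner, _ = Counter(answers).most_common(1)[0]
    return winner
```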
Knowledge conflict arises from discrepancies between information in the context of a large language model (LLM) and the knowledge stored in its parameters. This can hurt performance when using standard decoding techniques, which tend to ignore the context…
External link:
http://arxiv.org/abs/2409.07394
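One common way to make decoding respect the context is to contrast the model's next-token scores with and without the context and boost tokens the context makes more likely. A hedged sketch of that adjustment (the weighting scheme and names are illustrative, not the paper's exact method):

```python
def context_adjusted_logits(logits_with_ctx, logits_no_ctx, alpha=1.0):
    # Boost tokens whose score rises when the context is present,
    # steering decoding toward context-supported continuations.
    return [
        with_ctx + alpha * (with_ctx - no_ctx)
        for with_ctx, no_ctx in zip(logits_with_ctx, logits_no_ctx)
    ]
```

With `alpha = 0` this reduces to ordinary decoding; larger values weight the contextual evidence more heavily.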
We establish rigorous inequalities between different electronic properties linked to optical sum rules, and organize them into weak and strong bounds on three characteristic properties of insulators: electron localization length $\ell$ (the quantum f…
External link:
http://arxiv.org/abs/2407.17908
Author:
Saha, Swarnadeep, Prasad, Archiki, Chen, Justin Chih-Yao, Hase, Peter, Stengel-Eskin, Elias, Bansal, Mohit
Language models can be used to solve long-horizon planning problems in two distinct modes: a fast 'System-1' mode, directly generating plans without any explicit search or backtracking, and a slow 'System-2' mode, planning step-by-step by explicitly…
External link:
http://arxiv.org/abs/2407.14414