Showing 1-10 of 20 for the search: '"Zhang, Michael J. Q."'
Large language models (LLMs) must often respond to highly ambiguous user requests. In such cases, the LLM's best response may be to ask a clarifying question to elicit more information. We observe existing LLMs often respond by presupposing a single…
External link:
http://arxiv.org/abs/2410.13788
Author:
Zhang, Michael J. Q., Choi, Eunsol
Resolving ambiguities through interaction is a hallmark of natural language, and modeling this behavior is a core challenge in crafting AI assistants. In this work, we study such behavior in LMs by proposing a task-agnostic framework for resolving…
External link:
http://arxiv.org/abs/2311.09469
Modern language models have the capacity to store and use immense amounts of knowledge about real-world entities, but it remains unclear how to update such knowledge stored in model parameters. While prior methods for updating knowledge in LMs…
External link:
http://arxiv.org/abs/2306.09306
Author:
Zhang, Michael J. Q., Choi, Eunsol
While large language models are able to retain vast amounts of world knowledge seen during pretraining, such knowledge is prone to going out of date and is nontrivial to update. Furthermore, these models are often used under temporal misalignment…
External link:
http://arxiv.org/abs/2305.14824
Author:
Cole, Jeremy R., Zhang, Michael J. Q., Gillick, Daniel, Eisenschlos, Julian Martin, Dhingra, Bhuwan, Eisenstein, Jacob
Trustworthy language models should abstain from answering questions when they do not know the answer. However, the answer to a question can be unknown for a variety of reasons. Prior research has focused on the case in which the question is clear and…
External link:
http://arxiv.org/abs/2305.14613
Pre-trained language models (LMs) are used for knowledge intensive tasks like question answering, but their knowledge gets continuously outdated as the world changes. Prior work has studied targeted updates to LMs, injecting individual facts and…
External link:
http://arxiv.org/abs/2305.01651
Author:
Cole, Jeremy R., Jain, Palak, Eisenschlos, Julian Martin, Zhang, Michael J. Q., Choi, Eunsol, Dhingra, Bhuwan
Identifying the difference between two versions of the same article is useful to update knowledge bases and to understand how articles evolve. Paired texts occur naturally in diverse situations: reporters write similar news stories and maintainers of…
External link:
http://arxiv.org/abs/2303.00242
Question answering models can use rich knowledge sources -- up to one hundred retrieved passages and parametric knowledge in the large-scale language model (LM). Prior work assumes information in such knowledge sources is consistent with each other…
External link:
http://arxiv.org/abs/2210.13701
Language models (LMs) are typically trained once on a large-scale corpus and used for years without being updated. However, in a dynamic world, new entities constantly arise. We propose a framework to analyze what LMs can infer about new entities…
External link:
http://arxiv.org/abs/2205.02832
Author:
Zhang, Michael J. Q., Choi, Eunsol
Answers to the same question may change depending on the extra-linguistic contexts (when and where the question was asked). To study this challenge, we introduce SituatedQA, an open-retrieval QA dataset where systems must produce the correct answer…
External link:
http://arxiv.org/abs/2109.06157