Showing 1 - 10 of 21 for the search: '"Lin, Jessy"'
Users often make ambiguous requests that require clarification. We study the problem of asking clarification questions in an information retrieval setting, where systems often face ambiguous search queries and it is challenging to turn the uncertainty …
External link:
http://arxiv.org/abs/2405.15784
Author:
Lin, Jessy, Du, Yuqing, Watkins, Olivia, Hafner, Danijar, Abbeel, Pieter, Klein, Dan, Dragan, Anca
To interact with humans and act in the world, agents need to understand the range of language that people use and relate it to the visual world. While current agents can learn to execute simple language instructions, we aim to build agents that leverage …
External link:
http://arxiv.org/abs/2308.01399
We describe a class of tasks called decision-oriented dialogues, in which AI assistants such as large language models (LMs) must collaborate with one or more humans via natural language to help them make complex decisions. We formalize three domains …
External link:
http://arxiv.org/abs/2305.20076
Author:
Carroll, Micah, Paradise, Orr, Lin, Jessy, Georgescu, Raluca, Sun, Mingfei, Bignell, David, Milani, Stephanie, Hofmann, Katja, Hausknecht, Matthew, Dragan, Anca, Devlin, Sam
Randomly masking and predicting word tokens has been a successful approach in pre-training language models for a variety of downstream tasks. In this work, we observe that the same idea also applies naturally to sequential decision-making, where many … (a hedged sketch of this masking idea follows the link below)
External link:
http://arxiv.org/abs/2211.10869
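A minimal sketch of the masked-prediction idea described in the entry above (arXiv:2211.10869): randomly mask positions in a (state, action) trajectory and train a bidirectional transformer to reconstruct them, analogously to masked language modeling. The toy featurization, module sizes, masking rate, and reconstruction loss below are illustrative assumptions, not the paper's architecture:

import torch
import torch.nn as nn

class TrajectoryMaskedModel(nn.Module):
    """Bidirectional encoder that reconstructs masked trajectory tokens."""
    def __init__(self, feat_dim=4, dim=64, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(feat_dim, dim)            # toy state/action features
        self.mask_token = nn.Parameter(torch.zeros(dim)) # learned [MASK] embedding
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=n_layers,
        )
        self.head = nn.Linear(dim, feat_dim)             # reconstruct the features

    def forward(self, traj, mask):
        x = self.embed(traj)
        x = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(x), x)
        return self.head(self.encoder(x))

traj = torch.randn(8, 32, 4)              # batch of toy trajectories
mask = torch.rand(8, 32) < 0.15           # BERT-style random masking
model = TrajectoryMaskedModel()
pred = model(traj, mask)
loss = ((pred - traj) ** 2)[mask].mean()  # loss only on masked positions
loss.backward()

Different mask patterns (mask all future actions, mask a goal state, and so on) would then correspond to different inference tasks, which appears to be the point the truncated snippet is gesturing at.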
We introduce translation error correction (TEC), the task of automatically correcting human-generated translations. Imperfections in machine translations (MT) have long motivated systems for improving translations post-hoc with automatic post-editing …
External link:
http://arxiv.org/abs/2206.08593
Author:
Carroll, Micah, Lin, Jessy, Paradise, Orr, Georgescu, Raluca, Sun, Mingfei, Bignell, David, Milani, Stephanie, Hofmann, Katja, Hausknecht, Matthew, Dragan, Anca, Devlin, Sam
Randomly masking and predicting word tokens has been a successful approach in pre-training language models for a variety of downstream tasks. In this work, we observe that the same idea also applies naturally to sequential decision making, where many …
External link:
http://arxiv.org/abs/2204.13326
Author:
Fried, Daniel, Aghajanyan, Armen, Lin, Jessy, Wang, Sida, Wallace, Eric, Shi, Freda, Zhong, Ruiqi, Yih, Wen-tau, Zettlemoyer, Luke, Lewis, Mike
Code is seldom written in a single left-to-right pass and is instead repeatedly edited and refined. We introduce InCoder, a unified generative model that can perform program synthesis (via left-to-right generation) as well as editing (via infilling). (A hedged sketch of the infilling format follows the link below.)
External link:
http://arxiv.org/abs/2204.05999
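A hedged sketch of the "editing via infilling" setup mentioned above (arXiv:2204.05999): a span of code is cut out, marked with a sentinel, and appended at the end of the sequence, so an ordinary left-to-right generative model can learn to fill in the middle. The sentinel names and the helper function below are illustrative, not InCoder's actual vocabulary or API:

def make_infilling_example(code: str, span_start: int, span_end: int) -> str:
    """Rewrite (prefix, span, suffix) as a single left-to-right training string:
    the span is removed, replaced by a sentinel, and appended after the suffix."""
    prefix = code[:span_start]
    span = code[span_start:span_end]
    suffix = code[span_end:]
    return f"{prefix}<MASK:0>{suffix}<INFILL>{span}<EOS>"

code = "def add(a, b):\n    return a + b\n"
start = code.index("return")
print(make_infilling_example(code, start, len(code)))
# At inference time, feeding "prefix<MASK:0>suffix<INFILL>" and sampling left to
# right produces the missing span, so the same decoder does synthesis and editing.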
In classic instruction following, language like "I'd like the JetBlue flight" maps to actions (e.g., selecting that flight). However, language also conveys information about a user's underlying reward function (e.g., a general preference for JetBlue) … (a toy illustration of this reward-inference idea follows the link below)
External link:
http://arxiv.org/abs/2204.02515
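A toy, hedged illustration of the point made in the entry above (arXiv:2204.02515): the same utterance both selects an action and serves as evidence about the user's underlying reward. The two reward hypotheses, their numeric values, and the Boltzmann-rational speaker below are invented for this sketch, not the paper's model:

import math

# Hypothetical reward functions over two flight options (values are made up).
candidate_rewards = {
    "prefers_jetblue": {"jetblue": 1.0, "delta": 0.2},
    "indifferent":     {"jetblue": 0.5, "delta": 0.5},
}
prior = {name: 0.5 for name in candidate_rewards}

def utterance_likelihood(reward):
    """P(user says "I'd like the JetBlue flight" | reward), assuming a
    Boltzmann-rational speaker who tends to request higher-reward options."""
    z = math.exp(reward["jetblue"]) + math.exp(reward["delta"])
    return math.exp(reward["jetblue"]) / z

unnormalized = {name: prior[name] * utterance_likelihood(reward)
                for name, reward in candidate_rewards.items()}
total = sum(unnormalized.values())
posterior = {name: p / total for name, p in unnormalized.items()}
print(posterior)  # belief shifts toward "prefers_jetblue" after the request

The resulting posterior over rewards can then inform later decisions rather than being discarded after the single flight selection, which is what distinguishes this view from treating the utterance as a one-off command.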
Academic article (sign-in is required to view this result).
Current neural network-based classifiers are susceptible to adversarial examples even in the black-box setting, where the attacker only has query access to the model. In practice, the threat model for real-world systems is often more restrictive than … (a sketch of query-only gradient estimation follows the link below)
External link:
http://arxiv.org/abs/1804.08598
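A hedged sketch in the spirit of the entry above (arXiv:1804.08598), which studies attacks under query-only access: an NES-style antithetic Gaussian estimator approximates the gradient of a loss from black-box queries alone. black_box_loss is a stand-in for querying a real classifier, not part of the paper's code:

import numpy as np

def black_box_loss(x):
    """Stand-in for 'query the model': any scalar loss of the input."""
    return float(np.sum(np.sin(x)))

def nes_gradient_estimate(x, n_samples=50, sigma=0.01, seed=0):
    """Estimate the gradient of black_box_loss at x from antithetic Gaussian queries."""
    rng = np.random.default_rng(seed)
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)
        # Two queries per sample: finite difference along a random direction.
        grad += (black_box_loss(x + sigma * u) - black_box_loss(x - sigma * u)) * u
    return grad / (2 * sigma * n_samples)

x = np.zeros(10)
print(nes_gradient_estimate(x))  # approximately cos(0) = 1 per coordinate
# An attacker would then take (projected) gradient steps on the input using this
# estimate in place of true gradients.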