Showing 1 - 10 of 34 for search: '"Chris Brockett"'
Author:
Ryan Volum, Sudha Rao, Michael Xu, Gabriel DesGarennes, Chris Brockett, Benjamin Van Durme, Olivia Deng, Akanksha Malhotra, Bill Dolan
Published in:
Proceedings of the 3rd Wordplay: When Language Meets Games Workshop (Wordplay 2022).
Published in:
ACL/IJCNLP (Findings)
The advent of large pre-trained language models has made it possible to make high-quality predictions on how to add or change a sentence in a document. However, the high branching factor inherent to text generation impedes the ability of even the …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::4e7e5daba6b23ae0083ace15bf724722
http://arxiv.org/abs/2106.07192
Large pretrained generative models like GPT-3 often suffer from hallucinating non-existent or incorrect content, which undermines their potential merits in real applications. Existing work usually attempts to detect these hallucinations based on a …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::0905a324e07610518dbc6cd6fbfb91f1
http://arxiv.org/abs/2104.08704
Author:
Yizhe Zhang, Siqi Sun, Xiang Gao, Yuwei Fang, Chris Brockett, Michel Galley, Jianfeng Gao, Bill Dolan
Recent advances in large-scale pre-training such as GPT-3 allow seemingly high quality text to be generated from a given prompt. However, such generation systems often suffer from problems of hallucinated facts, and are not inherently designed to …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::416976196477cde44dcc6fdf9de125ba
Author:
Chris Brockett, Felix Faltings, Michel Galley, Bill Dolan, Jianfeng Gao, Gerold Hintz, Chris Quirk
Published in:
NAACL-HLT
A prevailing paradigm in neural text generation is one-shot generation, where text is produced in a single step. The one-shot setting is inadequate, however, when the constraints the user wishes to impose on the generated text are dynamic, especially …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::b3ce113ae4a53b01ce0bd7a43d8a5a58
http://arxiv.org/abs/2010.12826
Published in:
EMNLP (1)
Large-scale pre-trained language models, such as BERT and GPT-2, have achieved excellent performance in language representation learning and free-form text generation. However, these models cannot be directly employed to generate text under specified …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::6c028409af1c6d8678444c172c50fefa
http://arxiv.org/abs/2005.00558
Author:
Chris Brockett, Elnaz Nouri, Sudha Rao, Angela S. Lin, Asli Celikyilmaz, Debadeepta Dey, Bill Dolan
Published in:
ACL
Many high-level procedural tasks can be decomposed into sequences of instructions that vary in their order and choice of tools. In the cooking domain, the web offers many partially-overlapping text and video recipes (i.e. procedures) that describe …
Published in:
EMNLP (1)
Existing open-domain dialog models are generally trained to minimize the perplexity of target human responses. However, some human replies are more engaging than others, spawning more followup interactions. Current conversational models are …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::692d24b9e0b304fbe43f1a6886767ea3
Published in:
EMNLP/IJCNLP (1)
Generating responses in a targeted style is a useful yet challenging task, especially in the absence of parallel data. With limited data, existing methods tend to generate responses that are either less stylized or less context-relevant. We propose …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::4f69a193785ce32c046f48c80a156127
http://arxiv.org/abs/1909.05361
Author:
Xiang Gao, Chris Brockett, Lianhui Qin, Yejin Choi, Jianfeng Gao, Bill Dolan, Xiaodong Liu, Michel Galley
Published in:
ACL (1)
Although neural conversation models are effective in learning how to produce fluent responses, their primary challenge lies in knowing what to say to make the conversation contentful and non-vacuous. We present a new end-to-end approach to contentful …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::b5b9ce5b36fff159def5e996c0720e8a
http://arxiv.org/abs/1906.02738