Showing 1 - 10 of 116
for the search: '"Luke Zettlemoyer"'
Author:
Devendra Singh Sachan, Mike Lewis, Dani Yogatama, Luke Zettlemoyer, Joelle Pineau, Manzil Zaheer
Published in:
Transactions of the Association for Computational Linguistics, Vol 11, Pp 600-616 (2023)
Abstract: We introduce ART, a new corpus-level autoencoding approach for training dense retrieval models that does not require any labeled training data. Dense retrieval is a central challenge for open-domain tasks, such as Open QA, where state-of-the-art…
External link:
https://doaj.org/article/1876b586e8df46f39c701449c9e93aa5
Author:
Xian Li, Yinhan Liu, Jiatao Gu, Luke Zettlemoyer, Sergey Edunov, Michael Lewis, Naman Goyal, Marjan Ghazvininejad
Published in:
Transactions of the Association for Computational Linguistics. 8:726-742
This paper demonstrates that multilingual denoising pre-training produces significant performance gains across a wide variety of machine translation (MT) tasks. We present mBART -- a sequence-to-sequence denoising auto-encoder pre-trained on large-scale…
Author:
Hongjin SU, Jungo Kasai, Chen Henry Wu, Weijia Shi, Tianlu Wang, Jiayi Xin, Rui Zhang, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, Tao Yu
Many recent approaches to natural language tasks are built on the remarkable abilities of large language models. Large language models can perform in-context learning, where they learn a new task from a few task demonstrations, without any parameter updates…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::e7b6e00a66a5367211bb3b9452cde0dd
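The in-context learning this abstract describes amounts to prompt construction: a few labeled demonstrations are concatenated ahead of the unlabeled test input, and a frozen language model completes the pattern. A minimal sketch of assembling such a few-shot prompt (the demonstration format and sentiment examples are illustrative assumptions, not taken from the paper):

```python
def build_icl_prompt(demonstrations, test_input):
    """Assemble a few-shot in-context learning prompt: each demonstration
    is an (input, label) pair shown before the unlabeled test input, so a
    frozen language model can infer the task from the pattern alone."""
    lines = []
    for text, label in demonstrations:
        lines.append(f"Input: {text}\nLabel: {label}")
    # The test input ends with an empty "Label:" slot for the model to fill.
    lines.append(f"Input: {test_input}\nLabel:")
    return "\n\n".join(lines)

# Hypothetical sentiment-classification demonstrations (illustrative only).
demos = [
    ("The movie was wonderful.", "positive"),
    ("I hated every minute.", "negative"),
]
prompt = build_icl_prompt(demos, "A delightful surprise.")
print(prompt)
```

No model weights change; the task is communicated entirely through the prompt text.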
Author:
Hongjin SU, Weijia Shi, Jungo Kasai, Yizhong Wang, Yushi Hu, Mari Ostendorf, Wen-tau Yih, Noah A. Smith, Luke Zettlemoyer, Tao Yu
We introduce INSTRUCTOR, a new method for computing text embeddings given task instructions: every text input is embedded together with instructions explaining the use case (e.g., task and domain descriptions). Unlike encoders from prior work that are…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::cfe85cf00e250927a133ff061da7fb26
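The core idea in this abstract is that the instruction is prepended to the input before encoding, so the same text receives a task-dependent embedding. A minimal sketch of that interface, with a toy hashing encoder standing in for the real model (the encoder, dimensions, and example instructions are all assumptions for illustration):

```python
import hashlib

def toy_encoder(text, dim=16):
    """Stand-in for a real text encoder: deterministic bag-of-words
    hashing into a fixed-size vector. Only the interface matters here."""
    vec = [0.0] * dim
    for token in text.lower().split():
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    return vec

def embed_with_instruction(instruction, text):
    """INSTRUCTOR-style embedding: the task instruction is prepended to
    the input, so the representation depends on the stated use case."""
    return toy_encoder(instruction + " " + text)

# The same sentence, embedded under two hypothetical instructions.
e1 = embed_with_instruction("Represent the question for retrieval:", "what is a transformer")
e2 = embed_with_instruction("Represent the title for clustering:", "what is a transformer")
```

A real system would swap `toy_encoder` for a trained transformer; the instruction-prefixing pattern is the part this abstract describes.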
Author:
Rabeeh Karimi Mahabadi, Luke Zettlemoyer, James Henderson, Lambert Mathias, Marzieh Saeidi, Veselin Stoyanov, Majid Yazdani
Published in:
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
We introduce MetaICL (Meta-training for In-Context Learning), a new meta-training framework for few-shot learning where a pretrained language model is tuned to do in-context learning on a large set of training tasks. This meta-training enables the model…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::2ed567a0ccf0c8d33fe2d7094a2e1c7b
http://arxiv.org/abs/2110.15943
Author:
Hu Xu, Gargi Ghosh, Po-Yao Huang, Dmytro Okhonko, Armen Aghajanyan, Florian Metze, Luke Zettlemoyer, Christoph Feichtenhofer
We present VideoCLIP, a contrastive approach to pre-train a unified model for zero-shot video and text understanding, without using any labels on downstream tasks. VideoCLIP trains a transformer for video and text by contrasting temporally overlapping…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::65db75a878a131a53042aa5e86727251
http://arxiv.org/abs/2109.14084
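The contrastive objective named in this abstract is typically an InfoNCE-style loss: for each video, its paired (temporally overlapping) text should score higher than every other text in the batch. A minimal sketch over a hypothetical similarity matrix (the batch, scores, and temperature-free form are illustrative assumptions, not the paper's exact setup):

```python
import math

def info_nce(sim, row):
    """Contrastive (InfoNCE) loss for one video: the diagonal entry
    (its true paired text) should dominate the softmax over the row."""
    logits = sim[row]
    log_z = math.log(sum(math.exp(x) for x in logits))
    return log_z - logits[row]  # -log softmax probability of the positive

# Hypothetical video-text similarity matrix for a batch of 3 pairs;
# diagonal entries are the temporally overlapping (positive) pairs.
sim = [
    [0.9, 0.1, 0.2],
    [0.0, 0.8, 0.1],
    [0.2, 0.3, 0.7],
]
loss = sum(info_nce(sim, i) for i in range(3)) / 3
print(loss)
```

Training lowers this loss by pulling positive video-text pairs together and pushing mismatched pairs apart; a symmetric text-to-video term is usually added as well.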
Despite significant interest in developing general purpose fact checking models, it is challenging to construct a large-scale fact verification dataset with realistic real-world claims. Existing claims are either authored by crowdworkers, thereby introducing…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::4c69d2aac47404dcd611a7e71ac902d9
http://arxiv.org/abs/2107.02153
Author:
Bhargavi Paranjape, Luke Zettlemoyer, Hannaneh Hajishirzi, Marjan Ghazvininejad, Julian Michael
Published in:
ACL/IJCNLP (Findings)
Many commonsense reasoning NLP tasks involve choosing between one or more possible answers to a question or prompt based on knowledge that is often implicit. Large pretrained language models (PLMs) can achieve near-human performance on such tasks…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::8e9b095b581034bda025e31b086c4074
http://arxiv.org/abs/2106.06823
Author:
Florian Metze, Prahal Arora, Luke Zettlemoyer, Gargi Ghosh, Hu Xu, Po-Yao Huang, Christoph Feichtenhofer, Masoumeh Aminzadeh
Published in:
ACL/IJCNLP (Findings)
We present a simplified, task-agnostic multi-modal pre-training approach that can accept either video or text input, or both, for a variety of end tasks. Existing pre-training approaches are task-specific, adopting either a single cross-modal encoder that requires…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::b7251a0fe79f15d5d5f1c32029890b5f
http://arxiv.org/abs/2105.09996