Showing 1 - 10 of 20 for search: '"Ian Tenney"'
Deep NLP models have been shown to learn spurious correlations, leaving them brittle to input perturbations. Recent work has shown that counterfactual or contrastive data -- i.e. minimally perturbed inputs -- can reveal these weaknesses, and that dat…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::4ae5c4412752579db12fe089c72d38ad
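To make the abstract's notion of a minimally perturbed input concrete, a toy sketch (hand-rolled for illustration, not the paper's method; the perturb helper and antonym table are hypothetical):

```python
# Toy counterfactual/contrastive pair: perturb the input minimally so the
# gold label should flip. Hand-rolled illustration, not the paper's method.
ANTONYMS = {"great": "terrible", "terrible": "great"}  # hypothetical mini-lexicon

def perturb(sentence: str) -> str:
    """Swap the first known sentiment adjective for its antonym."""
    tokens = sentence.split()
    for i, tok in enumerate(tokens):
        if tok.lower() in ANTONYMS:
            tokens[i] = ANTONYMS[tok.lower()]
            break
    return " ".join(tokens)

original = "The movie was great"    # label: positive
counterfactual = perturb(original)  # "The movie was terrible" -> negative
print(original, "->", counterfactual)
```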
Author:
Ann Yuan, Ellen Jiang, Ian Tenney, Sebastian Gehrmann, Andy Coenen, Carey Radebaugh, Mahima Pushkarna, Tolga Bolukbasi, Emily Reif, James Wexler, Jasmijn Bastings
Published in:
EMNLP (Demos)
We present the Language Interpretability Tool (LIT), an open-source platform for visualization and understanding of NLP models. We focus on core questions about model behavior: Why did my model make this prediction? When does it perform poorly? What…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::b5faca0b1a31434dc1de6cad0462d94d
http://arxiv.org/abs/2008.05122
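As a sense of how LIT is used, a minimal launch sketch following the pattern in the project README (https://github.com/PAIR-code/lit); the toy model and dataset below are hypothetical stand-ins, and method names such as predict_minibatch follow older releases and may differ in your installed version:

```python
# Minimal LIT server sketch, following the README pattern. ToyModel and
# ToyDataset are hypothetical; check your LIT version for the exact API.
from lit_nlp import dev_server
from lit_nlp import server_flags
from lit_nlp.api import dataset as lit_dataset
from lit_nlp.api import model as lit_model
from lit_nlp.api import types as lit_types

class ToyDataset(lit_dataset.Dataset):
    def __init__(self):
        self._examples = [{"sentence": "A great movie."},
                          {"sentence": "A terrible movie."}]

    def spec(self):
        return {"sentence": lit_types.TextSegment()}

class ToyModel(lit_model.Model):
    def input_spec(self):
        return {"sentence": lit_types.TextSegment()}

    def output_spec(self):
        return {"score": lit_types.RegressionScore()}

    def predict_minibatch(self, inputs):
        # Dummy scorer: sentence length stands in for a real prediction.
        return [{"score": float(len(ex["sentence"]))} for ex in inputs]

server = dev_server.Server({"toy": ToyModel()}, {"toy_data": ToyDataset()},
                           **server_flags.get_flags())
server.serve()  # open the printed local URL to explore predictions in the UI
```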
Published in:
BlackboxNLP@EMNLP
While there has been much recent work studying how linguistic information is encoded in pre-trained sentence representations, comparatively little is understood about how these models change when adapted to solve downstream tasks. Using a suite of an…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::449a89805ef74909e0ffd030c284ae55
http://arxiv.org/abs/2004.14448
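The abstract is cut off, but a generic way to ask "how did the model change?" is to compare layer activations before and after fine-tuning. A sketch (not the paper's analysis suite; the fine-tuned checkpoint path is a hypothetical placeholder):

```python
# Generic before/after comparison (not the paper's suite): cosine similarity
# between a pre-trained and a fine-tuned model's layer representations.
import torch
from transformers import AutoModel, AutoTokenizer

FINETUNED = "path/to/finetuned-bert"  # hypothetical fine-tuned copy of BERT

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
base = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
tuned = AutoModel.from_pretrained(FINETUNED, output_hidden_states=True)

batch = tok(["The cat sat on the mat."], return_tensors="pt")
with torch.no_grad():
    h_base = base(**batch).hidden_states    # embeddings + one tensor per layer
    h_tuned = tuned(**batch).hidden_states

for layer, (a, b) in enumerate(zip(h_base, h_tuned)):
    # Mean-pool over tokens, then compare the two models at this layer.
    sim = torch.cosine_similarity(a.mean(dim=1), b.mean(dim=1)).item()
    print(f"layer {layer}: cos={sim:.3f}")
```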
Author:
Yada Pruksachatkun, Ian Tenney, Haokun Liu, Philip Yeres, Samuel R. Bowman, Jason Phang, Alex Wang, Phu Mon Htut
Published in:
ACL (demo)
We introduce jiant, an open source toolkit for conducting multitask and transfer learning experiments on English NLU tasks. jiant enables modular and configuration-driven experimentation with state-of-the-art models and implements a broad set of task…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::a142207d49aa77bc9befc3b26a7ca2ef
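A quickstart sketch following the pattern in jiant's README (v2.x); treat the exact module paths and argument names as assumptions that may differ across versions:

```python
# jiant quickstart sketch (v2.x README pattern): download one task, then run
# a configuration-driven experiment. Argument names may vary by version.
import jiant.scripts.download_data.runscript as downloader
from jiant.proj.simple import runscript as run

downloader.download_data(["mrpc"], "/tmp/tasks")  # fetch an example GLUE task

args = run.RunConfiguration(
    run_name="mrpc_demo",
    exp_dir="/tmp/exp",
    data_dir="/tmp/tasks",
    hf_pretrained_model_name_or_path="roberta-base",
    tasks="mrpc",
    train_batch_size=16,
    num_train_epochs=3,
)
run.run_simple(args)  # trains and evaluates according to the configuration
```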
Published in:
EMNLP (1)
The success of pretrained contextual encoders, such as ELMo and BERT, has brought a great deal of interest in what these models learn: do they, without explicit supervision, learn to encode meaningful notions of linguistic structure? If so, how is th…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::dcb58854d237f32d116b7f1f17f6bb54
Published in:
BlackboxNLP@EMNLP
Pretrained Language Models (LMs) have been shown to possess significant linguistic, common sense, and factual knowledge. One form of knowledge that has not been studied yet in this context is information about the scalar magnitudes of objects. We sho…
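An illustrative way to query an LM for scalar magnitudes (not the paper's protocol) is a fill-mask template:

```python
# Illustrative scalar-magnitude probe (not the paper's protocol): ask a
# masked LM to fill in a quantity and inspect its top guesses.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
for pred in fill("A grown dog weighs about [MASK] kilograms.", top_k=5):
    print(pred["token_str"], round(pred["score"], 3))
```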
Author:
Patrick Xia, Benjamin Van Durme, Najoung Kim, Ellie Pavlick, Tal Linzen, Alexis Ross, Samuel R. Bowman, Ian Tenney, Adam Poliak, Alex Wang, Roma Patel, Thomas H. McCoy
Published in:
SEM@NAACL-HLT
We introduce a set of nine challenge tasks that test for the understanding of function words. These tasks are created by structurally mutating sentences from existing datasets to target the comprehension of specific types of function words (e.g., pre…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::1b37a420dc5cb823a1a7efdb6f2e2a0e
http://arxiv.org/abs/1904.11544
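To illustrate what "structurally mutating sentences" can look like, a toy sketch (hypothetical; the released tasks use carefully controlled mutations, not this lookup table):

```python
# Toy structural mutation targeting function words: swapping one preposition
# should flip the sentence's meaning, so a model must read the function word.
PREP_SWAP = {"above": "below", "below": "above",
             "before": "after", "after": "before"}

def mutate(sentence: str) -> str:
    return " ".join(PREP_SWAP.get(tok, tok) for tok in sentence.split())

print(mutate("The lamp hangs above the table."))
# -> "The lamp hangs below the table."
```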
Published in:
ACL (1)
Pre-trained text encoders have rapidly advanced the state of the art on many NLP tasks. We focus on one such model, BERT, and aim to quantify where linguistic information is captured within the network. We find that the model represents the steps of…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::c21cb307d0baf2ba22981e034a26cd04
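The raw material for this kind of layer-wise analysis is easy to extract (a sketch, not the paper's edge-probing setup):

```python
# Pull every layer's activation for one token; a separate classifier per
# layer could then test where a linguistic property is most decodable.
# Sketch only -- not the paper's edge-probing setup.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

batch = tok("Time flies like an arrow.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).hidden_states  # 13 tensors: embeddings + 12 layers

token_index = 1  # first wordpiece after [CLS]
per_layer = torch.stack([h[0, token_index] for h in hidden])
print(per_layer.shape)  # (13, 768): one vector per layer to feed a probe
```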
Published in:
EMNLP
We release a corpus of 43 million atomic edits across 8 languages. These edits are mined from Wikipedia edit history and consist of instances in which a human editor has inserted a single contiguous phrase into, or deleted a single contiguous phrase…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::150efd0de75cf15088401980f09720bb
http://arxiv.org/abs/1808.09422
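A loading sketch via the Hugging Face mirror of the corpus; the dataset id and config name are assumptions, so check the hub (or the authors' release) for the exact names:

```python
# Assumed Hugging Face mirror of WikiAtomicEdits; the id "wiki_atomic_edits"
# and the "english_insertions" config are assumptions -- verify on the hub.
from datasets import load_dataset

edits = load_dataset("wiki_atomic_edits", "english_insertions", split="train")
print(edits[0])  # e.g. a base sentence plus the inserted phrase
```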
Published in:
Physical Review A. 93