Showing 1 - 10 of 16
for search: '"R. Thomas McCoy"'
Published in:
Transactions of the Association for Computational Linguistics, Vol 11 (2023)
External link:
https://doaj.org/article/fcac0cb3289c4654a57733705ef21320
Academic article
This result is not available to unauthenticated users; log in to view it.
Published in:
ACL
Sequence-based neural networks show significant sensitivity to syntactic structure, but they still perform less well on syntactic tasks than tree-based networks. Such tree-based networks can be provided with a constituency parse, a dependency parse,
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::199516cc5ee9baa7d0f70a6750ffee18
http://arxiv.org/abs/2005.00019
Published in:
ACL
Pretrained neural models such as BERT, when fine-tuned to perform natural language inference (NLI), often show high accuracy on standard datasets, but display a surprising lack of sensitivity to word order on controlled challenge sets. We hypothesize
Published in:
BlackboxNLP@EMNLP
If the same neural network architecture is trained multiple times on the same dataset, will it make similar linguistic generalizations across runs? To study this question, we fine-tuned 100 instances of BERT on the Multi-genre Natural Language Infere
Author:
Michael A. Lepori, R. Thomas McCoy
Published in:
COLING
As the name implies, contextualized representations of language are typically motivated by their ability to encode context. Which aspects of context are captured by such representations? We introduce an approach to address this question using Represe
Published in:
Transactions of the Association for Computational Linguistics, Vol 8, Pp 125-140 (2020)
Learners that are exposed to the same training data might generalize differently due to differing inductive biases. In neural network models, inductive biases could in theory arise from any aspect of the model architecture. We investigate which archi
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::7eeaacd73a8b49beacb27bf450dfa531
Published in:
ACL (1)
A machine learning system can score well on a given test set by relying on heuristics that are effective for frequent example types but break down in more challenging cases. We study this issue within natural language inference (NLI), the task of det
Published in:
EMNLP
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
Conference on Empirical Methods in Natural Language Processing, Sep 2017, Copenhagen, Denmark. pp. 1712-1722
International audience; We present supertagging-based models for Tree Adjoining Grammar parsing that use neural network architectures and dense vector representation of supertags (elementary trees) to achieve state-of-the-art performance in unlabeled
Author:
Benjamin Van Durme, Ellie Pavlick, R. Thomas McCoy, Raghavendra Pappagari, Patrick Xia, Najoung Kim, Yinghui Huang, Katherin Yu, Roma Patel, Jan Hula, Edouard Grave, Shuning Jin, Ian Tenney, Samuel R. Bowman, Berlin Chen, Alex Wang
Published in:
Scopus-Elsevier
ACL (1)
Natural language understanding has recently seen a surge of progress with the use of sentence encoders like ELMo (Peters et al., 2018a) and BERT (Devlin et al., 2019) which are pretrained on variants of language modeling. We conduct the first large-s
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::08523946800da4a7d663cf8e3afe49f7
http://www.scopus.com/inward/record.url?eid=2-s2.0-85084066669&partnerID=MN8TOARS