Showing 1 - 10 of 11
for search: '"Alex Warstadt"'
Author:
Alex Warstadt, Samuel R. Bowman
Published in:
Algebraic Structures in Natural Language ISBN: 9781003205388
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::e923d3fd86c185f132a10ca66083dcbe
https://doi.org/10.1201/9781003205388-2
Published in:
Transactions of the Association for Computational Linguistics, Vol 7, Pp 625-641 (2019)
This paper investigates the ability of artificial neural networks to judge the grammatical acceptability of a sentence, with the goal of testing their linguistic competence. We introduce the Corpus of Linguistic Acceptability (CoLA), a set of 10,657 English sentences …
Published in:
EACL
Linguistically informed analyses of language models (LMs) contribute to the understanding and improvement of these models. Here, we introduce the corpus of Chinese linguistic minimal pairs (CLiMP), which can be used to investigate what knowledge Chinese …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::38db4c1fd3c26839d43fffde976eaad2
Published in:
ACL/IJCNLP (1)
Crowdsourcing is widely used to create data for common natural language understanding tasks. Despite the importance of these datasets for measuring and refining model understanding of language, there has been little focus on the crowdsourcing methods …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::254b66cdfa6efafcdc8d62aa514839b8
Author:
Alicia Parrish, Sebastian Schuster, Alex Warstadt, Omar Agha, Soo-Hwan Lee, Zhuoye Zhao, Samuel R. Bowman, Tal Linzen
Published in:
Proceedings of the 25th Conference on Computational Natural Language Learning.
Understanding language requires grasping not only the overtly stated content, but also making inferences about things that were left unsaid. These inferences include presuppositions, a phenomenon by which a listener learns about new information through …
Published in:
EMNLP (1)
One reason pretraining on self-supervised linguistic tasks is effective is that it teaches models features that are helpful for language understanding. However, we want pretrained models to learn not only to represent linguistic features, but also to …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::ebda02d481295b1364761e8fd1741c08
http://arxiv.org/abs/2010.05358
Published in:
ACL
Natural language inference (NLI) is an increasingly important task for natural language understanding, which requires one to infer whether a sentence entails another. However, the ability of NLI models to make pragmatic inferences remains understudied …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::c95c4b380bfcd4c4da355c4643b50f9c
Published in:
ACL/IJCNLP (1)
NLP is currently dominated by general-purpose pretrained language models like RoBERTa, which achieve strong performance on NLU tasks through pretraining on billions of words. But what exact knowledge or skills do Transformer LMs learn from large-scale …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::aafc73aae70819f5c350586520507193
Author:
Samuel R. Bowman, Haokun Liu, Alicia Parrish, Anhad Mohananey, Alex Warstadt, Sheng-Fu Wang, Wei Peng
Published in:
Transactions of the Association for Computational Linguistics, Vol 8, Pp 377-392 (2020)
We introduce the Benchmark of Linguistic Minimal Pairs (shortened to BLiMP), a challenge set for evaluating what language models (LMs) know about major grammatical phenomena in English. BLiMP consists of 67 sub-datasets, each containing 1,000 minimal pairs …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::85d6b23553cbce6733dc34c4dfae0173
http://arxiv.org/abs/1912.00582
Author:
Wei Peng, Samuel R. Bowman, Haokun Liu, Alicia Parrish, Anhad Mohananey, Alex Warstadt, Sheng-Fu Wang
Published in:
Transactions of the Association for Computational Linguistics. 8:867-868
We correct erroneously reported results on BLiMP.