Showing 1 - 10 of 108 for search: '"Linzen, T."'
Academic article
This result is not available to users who are not signed in; sign in to view it.
Academic article
This result is not available to users who are not signed in; sign in to view it.
Author:
Fernández, R., Linzen, T.
External link:
https://explore.openaire.eu/search/publication?articleId=narcis______::e00324b9d83bc9097b211bffe185e7fe
https://dare.uva.nl/personal/pure/en/publications/the-24th-conference-on-computational-natural-language-learning-connl(93f1d03c-fcfb-46db-a3d2-6d5672050c90).html
Author:
van Schijndel, M., Linzen, T.
The disambiguation of a syntactically ambiguous sentence in favor of a dispreferred parse can lead to slower reading at the disambiguation point. This phenomenon, referred to as a garden path effect, has motivated models in which readers only maintain…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::a5dbf0453affd08cc50eeda1f6b34de0
https://psyarxiv.com/7j8d6
Author:
Baan, J., Leible, J., Nikolaus, M., Rau, D., Ulmer, D., Baumgärtner, T., Hupkes, D., Bruni, E., Linzen, T., Chrupała, G., Belinkov, Y.
Published in:
BlackboxNLP@ACL
The BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP at ACL 2019: ACL 2019 : proceedings of the Second Workshop : August 1, 2019, Florence, Italy, 127-137
We present a detailed comparison of two types of sequence-to-sequence models trained to conduct a compositional task. The models are architecturally identical at inference time, but differ in the way that they are trained: our baseline model is train…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::acbe70d920f8c45f7f88d5ec57bf0ca9
Published in:
BlackboxNLP@ACL
The BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP at ACL 2019: ACL 2019 : proceedings of the Second Workshop : August 1, 2019, Florence, Italy, 1-11
While sequence-to-sequence models have shown remarkable generalization power across several natural language tasks, the solutions they construct are argued to be less compositional than human-like generalization. In this paper, we present seq2attn, a…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::3dfd3394d444cd6dd2e2371b1892f989
Author:
Testoni, A., Pezzelle, S., Bernardi, R., Chersoni, E., Jacobs, C., Lenci, A., Linzen, T., Prévot, L., Santus, E.
Published in:
Cognitive Modeling and Computational Linguistics: NAACL HLT 2019 : proceedings of the workshop : June 7, 2019, Minneapolis, USA, 105-116
Inspired by the literature on multisensory integration, we develop a computational model to ground quantifiers in perception. The model learns to pick, out of nine quantifiers (‘few’, ‘many’, ‘all’, etc.), the one that is more likely to d…
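As a rough illustration of the grounding idea in this abstract (not the paper's model: the quantifier set and thresholds below are invented for the sketch), one can map the proportion of target objects in a scene to a quantifier:

# Toy sketch: pick the quantifier most plausible for the proportion of
# target objects in a scene. Thresholds are invented for illustration;
# the paper instead learns this mapping from perceptual input.
def quantifier(proportion: float) -> str:
    if proportion == 0.0:
        return "none"
    if proportion < 0.2:
        return "few"
    if proportion < 0.6:
        return "many"  # placement of "many" here is purely illustrative
    if proportion < 1.0:
        return "most"
    return "all"

print(quantifier(9 / 10))  # -> "most"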
Author:
Giulianelli, M., Harding, J., Mohnert, F., Hupkes, D., Zuidema, W., Linzen, T., Chrupała, G., Alishahi, A.
Published in:
The 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP: EMNLP 2018 : proceedings of the First Workshop : November 1, 2018, Brussels, Belgium
BlackboxNLP@EMNLP
How do neural language models keep track of number agreement between subject and verb? We show that ‘diagnostic classifiers’, trained to predict number from the internal states of a language model, provide a detailed understanding of how, when, and w…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::b3a8352088ef7bf3eb78f74987208539
http://arxiv.org/abs/1808.08079
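The ‘diagnostic classifiers’ mentioned in this abstract are simple probes trained on a network's internal states. A minimal sketch of the idea in Python, using synthetic vectors as stand-ins for the language model's hidden states (the paper probes a real language model):

# Diagnostic-classifier sketch: a linear probe predicts a linguistic
# feature (here, subject number) from hidden states. The states are
# simulated as noise plus a weak number-dependent signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_states, dim = 1000, 50

# 0 = singular subject, 1 = plural subject (one label per hidden state)
labels = rng.integers(0, 2, size=n_states)

# Simulated hidden states: random vectors plus a signal correlated with number
signal = np.outer(labels - 0.5, rng.normal(size=dim))
states = rng.normal(size=(n_states, dim)) + 0.5 * signal

# The diagnostic classifier itself: a linear probe on the hidden states
probe = LogisticRegression(max_iter=1000).fit(states[:800], labels[:800])
print("probe accuracy:", probe.score(states[800:], labels[800:]))

If the probe recovers the feature well above chance, the hidden states encode it in a linearly readable way; the paper uses this to track where and when number information is maintained or lost.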
Published in:
The 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP: EMNLP 2018 : proceedings of the First Workshop : November 1, 2018, Brussels, Belgium, 222-231
BlackboxNLP@EMNLP
In this paper, we attempt to link the inner workings of a neural language model to linguistic theory, focusing on a complex phenomenon well discussed in formal linguistics: (negative) polarity items. We briefly discuss the leading hypotheses about…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::ad16e85cd398d57858c37a10f13567d4
https://doi.org/10.18653/v1/w18-5424
Author:
Bastings, J., Baroni, M., Weston, J., Cho, K., Kiela, D., Linzen, T., Chrupała, G., Alishahi, A.
Published in:
BlackboxNLP@EMNLP
The 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP: EMNLP 2018 : proceedings of the First Workshop : November 1, 2018, Brussels, Belgium, 47-55
Lake & Baroni (2018) recently introduced the SCAN data set, which consists of simple commands paired with action sequences and is intended to test the strong generalization abilities of recurrent sequence-to-sequence models. Their initial experiments…
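For context, SCAN pairs synthetic commands with the action sequences they denote, so a model must compose familiar primitives in novel ways. A toy interpreter for a small fragment of such a command language (a simplified sketch, not the full SCAN grammar):

# Toy interpreter illustrating the kind of command-to-action-sequence
# pairs SCAN contains; only primitives, "twice"/"thrice", and "and"
# are handled here.
PRIMITIVES = {"walk": "WALK", "run": "RUN", "jump": "JUMP", "look": "LOOK"}
REPEATS = {"twice": 2, "thrice": 3}

def interpret(command: str) -> str:
    actions = []
    for part in command.split(" and "):
        words = part.split()
        action = PRIMITIVES[words[0]]
        count = REPEATS.get(words[1], 1) if len(words) > 1 else 1
        actions.extend([action] * count)
    return " ".join(actions)

print(interpret("jump twice and walk"))  # -> "JUMP JUMP WALK"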