Showing 1 - 10 of 251 for the search: '"NIVRE, JOAKIM"'
Author:
Weissweiler, Leonie, Böbel, Nina, Guiller, Kirian, Herrera, Santiago, Scivetti, Wesley, Lorenzi, Arthur, Melnik, Nurit, Bhatia, Archna, Schütze, Hinrich, Levin, Lori, Zeldes, Amir, Nivre, Joakim, Croft, William, Schneider, Nathan
The Universal Dependencies (UD) project has created an invaluable collection of treebanks with contributions in over 140 languages. However, the UD annotations do not tell the full story. Grammatical constructions that convey meaning through a partic…
External link:
http://arxiv.org/abs/2403.17748
The recent increase in data and model scale for language model pre-training has led to huge training costs. In scenarios where new data become available over time, updating a model instead of fully retraining it would therefore provide significant ga…
External link:
http://arxiv.org/abs/2311.01200
Author:
Kulmizev, Artur, Nivre, Joakim
In the last half-decade, the field of natural language processing (NLP) has undergone two major transitions: the switch to neural networks as the primary modeling paradigm and the homogenization of the training regime (pre-train, then fine-tune). Ami…
External link:
http://arxiv.org/abs/2110.08887
In this paper, we evaluate the translation of negation both automatically and manually, in English–German (EN–DE) and English–Chinese (EN–ZH). We show that the ability of neural machine translation (NMT) models to translate negation has improved…
External link:
http://arxiv.org/abs/2107.12203
Author:
Basirat, Ali, Nivre, Joakim
Standard models for syntactic dependency parsing take words to be the elementary units that enter into dependency relations. In this paper, we investigate whether there are any benefits from enriching these models with the more abstract notion of nuc…
External link:
http://arxiv.org/abs/2101.11959
Published in:
EACL 2021
Since the popularization of the Transformer as a general-purpose feature encoder for NLP, many studies have attempted to decode linguistic structure from its novel multi-head attention mechanism. However, much of such work focused almost exclusively…
External link:
http://arxiv.org/abs/2101.10927
Recent work has shown that deeper character-based neural machine translation (NMT) models can outperform subword-based models. However, it is still unclear what makes deeper character-based models successful. In this paper, we conduct an investigatio…
External link:
http://arxiv.org/abs/2011.03469
Author:
Basirat, Ali, Nivre, Joakim
We study the effect of rich supertag features in greedy transition-based dependency parsing. While previous studies have shown that sparse boolean features representing the 1-best supertag of a word can improve parsing accuracy, we show that we can g…
External link:
http://arxiv.org/abs/2007.04686
We generalize principal component analysis for embedding words into a vector space. The generalization is made on two major levels. The first is to generalize the concept of the corpus as a counting process which is defined by three key elements voca…
External link:
http://arxiv.org/abs/2007.04629
We present Køpsala, the Copenhagen-Uppsala system for the Enhanced Universal Dependencies Shared Task at IWPT 2020. Our system is a pipeline consisting of off-the-shelf models for everything but enhanced graph parsing, and for the latter, a transi…
External link:
http://arxiv.org/abs/2005.12094