Showing results 1 - 10 of 10,955 for search: '"Sequence labeling"'
Various linearizations have been proposed to cast syntactic dependency parsing as sequence labeling. However, these approaches do not support more complex graph-based representations, such as semantic dependencies or enhanced universal dependencies…
External link:
http://arxiv.org/abs/2410.17972
In recent years, biomedical event extraction has been dominated by complicated pipeline and joint methods, which need to be simplified. In addition, existing work has not effectively utilized trigger word information explicitly. Hence, we propose MLS…
External link:
http://arxiv.org/abs/2408.05545
Real world deployments of word alignment are almost certain to cover both high and low resource languages. However, the state-of-the-art for this task recommends a different model class depending on the availability of gold alignment training data…
External link:
http://arxiv.org/abs/2407.12881
Chinese sequence labeling tasks are heavily reliant on accurate word boundary demarcation. Although current pre-trained language models (PLMs) have achieved substantial gains on these tasks, they rarely explicitly incorporate boundary information…
External link:
http://arxiv.org/abs/2404.05560
Author:
Tang, Xuemei, Su, Qi
Sequence labeling models often benefit from incorporating external knowledge. However, this practice introduces data heterogeneity and complicates the model with additional modules, leading to increased expenses for training a high-performing model.
External link:
http://arxiv.org/abs/2402.13534
Author:
Ma, Bolei, Nie, Ercong, Yuan, Shuzhou, Schmid, Helmut, Färber, Michael, Kreuter, Frauke, Schütze, Hinrich
Prompt-based methods have been successfully applied to multilingual pretrained language models for zero-shot cross-lingual understanding. However, most previous studies primarily focused on sentence-level classification tasks, and only a few consider…
External link:
http://arxiv.org/abs/2401.16589
Academic article
Login is required to view this result.
The incremental sequence labeling task involves continuously learning new classes over time while retaining knowledge of the previous ones. Our investigation identifies two significant semantic shifts: E2O (where the model mislabels an old entity as…
External link:
http://arxiv.org/abs/2402.10447
Author:
Dukić, David, Šnajder, Jan
Pre-trained language models based on masked language modeling (MLM) excel in natural language understanding (NLU) tasks. While fine-tuned MLM-based encoders consistently outperform causal language modeling decoders of comparable size, recent…
External link:
http://arxiv.org/abs/2401.14556
Unified Sequence Labeling, which articulates different sequence labeling problems such as Named Entity Recognition, Relation Extraction, and Semantic Role Labeling in a generalized sequence-to-sequence format, opens up the opportunity to…
External link:
http://arxiv.org/abs/2311.03748