Showing 1 - 7 of 7 for search: '"Alexandra Uma"'
Published in:
Frontiers in Artificial Intelligence, Vol 5 (2022)
Crowdsourced data are often rife with disagreement, whether because of genuine item ambiguity, overlapping labels, subjectivity, or annotator error. Hence, a variety of methods have been developed for learning from data containing disagreement. One of … (a minimal sketch of this idea follows below the external link)
External link:
https://doaj.org/article/e87afaa098da4f31a748312dc42dbc2f
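Not from the article itself, but as a hedged illustration of the starting point such methods share: a minimal sketch of turning raw crowd annotations into per-item label distributions by plain vote normalization. Function and variable names are assumptions, and real approaches (e.g. Bayesian annotation models) go well beyond this baseline.

```python
from collections import Counter

def soft_labels(annotations, label_set):
    """Normalize per-item crowd votes into probability distributions.

    `annotations` maps an item id to the list of labels its annotators
    chose; `label_set` fixes the column order of the output vectors.
    The key point: disagreement is kept as a distribution instead of
    being voted away by majority aggregation.
    """
    dists = {}
    for item, votes in annotations.items():
        counts = Counter(votes)
        total = sum(counts.values())
        dists[item] = [counts[label] / total for label in label_set]
    return dists

# Three annotators, one ambiguous item: the 2-1 split survives as [0.67, 0.33]
print(soft_labels({"s1": ["pos", "pos", "neg"]}, ["pos", "neg"]))
```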
Published in:
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations.
Published in:
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Fornaciari, T, Uma, A, Paun, S, Plank, B, Hovy, D & Poesio, M 2021, 'Beyond Black & White: Leveraging Annotator Disagreement via Soft-Label Multi-Task Learning', in Proceedings of NAACL, Association for Computational Linguistics, pp. 2591–2597.
NAACL-HLT
Supervised learning assumes that a ground-truth label exists. However, the reliability of this ground truth depends on human annotators, who often disagree. Prior work has shown that this disagreement can be helpful in training models. We propose a n…
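As a hedged reading of the title rather than the paper's released code: a minimal PyTorch sketch of soft-label multi-task training, where one classifier's logits are optimized jointly against the hard gold label (cross-entropy) and the normalized distribution of annotator votes (KL divergence). The class and function names, the architecture, and the loss weighting `alpha` are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftLabelClassifier(nn.Module):
    """Shared encoder with a single classification head; the multi-task
    signal comes from training the same logits against two targets."""
    def __init__(self, encoder, hidden_dim, num_classes):
        super().__init__()
        self.encoder = encoder  # any nn.Module mapping inputs to hidden_dim
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):
        return self.head(self.encoder(x))

def multitask_loss(logits, gold, annotator_dist, alpha=0.5):
    # Hard task: standard cross-entropy against the aggregated gold label.
    hard = F.cross_entropy(logits, gold)
    # Soft task: match the predicted distribution to the (pre-normalized)
    # distribution of annotator votes.
    soft = F.kl_div(F.log_softmax(logits, dim=-1), annotator_dist,
                    reduction="batchmean")
    # alpha balances the two tasks; 0.5 is an arbitrary illustration.
    return alpha * hard + (1.0 - alpha) * soft
```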
Author:
Jon Chamberlain, Tommaso Fornaciari, Massimo Poesio, Barbara Plank, Anca Dumitrache, Alexandra Uma, Edwin Simpson, Tristan Miller
Published in:
Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)
SemEval@ACL/IJCNLP
Disagreement between coders is ubiquitous in virtually all datasets annotated with human judgements in both natural language processing and computer vision. However, most supervised machine learning methods assume that a single preferred interpretati…
Author:
Silviu Paun, Dirk Hovy, Tommaso Fornaciari, Alexandra Uma, Barbara Plank, Michael Fell, Massimo Poesio, Valerio Basile
Published in:
Proceedings of the 1st Workshop on Benchmarking: Past, Present and Future
Basile, V, Fell, M, Fornaciari, T, Hovy, D, Paun, S, Plank, B, Poesio, M & Uma, A 2021, 'We Need to Consider Disagreement in Evaluation', in ACL-IJCNLP 2021 Workshop on Benchmarking: Past, Present and Future, Association for Computational Linguistics, pp. 15–21. https://doi.org/10.18653/v1/2021.bppf-1.3
Evaluation is of paramount importance in data-driven research fields such as Natural Language Processing (NLP) and Computer Vision (CV). But current evaluation practice in NLP, except for end-to-end tasks such as machine translation, spoken dialogue…
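Again as a hedged illustration rather than the paper's own protocol: one concrete way an evaluation can "consider disagreement" is to score a model's predicted distribution against the distribution of human judgements instead of a single gold label. The function below is an assumed, minimal soft cross-entropy metric, not a metric the paper prescribes.

```python
import numpy as np

def soft_cross_entropy(pred_dists, human_dists, eps=1e-12):
    """Mean cross-entropy between predicted label distributions and the
    normalized distributions of human judgements (lower is better).
    A hard gold label is just the special case of a one-hot human dist."""
    pred = np.clip(np.asarray(pred_dists, dtype=float), eps, 1.0)
    human = np.asarray(human_dists, dtype=float)
    return float(-(human * np.log(pred)).sum(axis=-1).mean())

# Item 1: annotators agree; item 2: they split 50/50 and the model hedges.
model = [[0.9, 0.1], [0.4, 0.6]]
humans = [[1.0, 0.0], [0.5, 0.5]]
print(soft_cross_entropy(model, humans))
```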
Published in:
NAACL-HLT (1)
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
We present a corpus of anaphoric information (coreference) crowdsourced through a game-with-a-purpose. The corpus, containing annotations for about 108,000 markables, is one of the largest corpora for coreference for English, and one of the largest c…
Author:
Heike Zinsmeister, Juntao Yu, Yulia Grishina, Fabian Simonjetz, Nafise Sadat Moosavi, Olga Uryupina, Adam Roussel, Varada Kolhatkar, Alexandra Uma, Massimo Poesio, Ina Roesiger
Published in:
Proceedings of the First Workshop on Computational Models of Reference, Anaphora and Coreference
The ARRAU corpus is an anaphorically annotated corpus of English providing rich linguistic information about anaphora resolution. The most distinctive feature of the corpus is the annotation of a wide range of anaphoric relations, including bridging…