Showing 1 - 10 of 96 results for search: '"Ines Rehbein"'
Published in:
Journal for Language Technology and Computational Linguistics. 35:iii-v
Published in:
ACM/IMS Transactions on Data Science. 2:1-27
During the last fifteen years, automatic text scaling has become one of the key tools of the Text as Data community in political science. Prominent text scaling algorithms, however, rely on the assumption that latent positions can be captured just by …
Author:
Ines Rehbein, Simone Paolo Ponzetto, Anna Adendorf, Oke Bahnsen, Lukas Stoetzer, Heiner Stuckenschmidt
Published in:
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing.
Author:
Manuela Sanguinetti, Cristina Bosco, Lauren Cassidy, Özlem Çetinoğlu, Alessandra Teresa Cignarella, Teresa Lynn, Ines Rehbein, Josef Ruppenhofer, Djamé Seddah, Amir Zeldes
This article presents a discussion on the main linguistic phenomena which cause difficulties in the analysis of user-generated texts found on the web and in social media, and proposes a set of annotation guidelines for their treatment within the Univ…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::79d70d50a75432cf907c73f5a3b27c0f
http://arxiv.org/abs/2011.02063
Author:
Bich-Ngoc Do, Ines Rehbein
Published in:
ACL
Recent work has shown that neural rerankers can improve results for dependency parsing over the top k trees produced by a base parser. However, all neural rerankers so far have been evaluated on English and Chinese only, both languages with a configu…
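To make the k-best reranking setup mentioned in the abstract concrete, here is a minimal, self-contained sketch: a base parser supplies k candidate trees, a separate scorer re-scores them, and the highest-scoring candidate is returned. The scorer and toy data below are placeholder assumptions, not the neural reranker described in the paper.

```python
# Generic k-best reranking sketch for dependency parsing (illustrative only).
from typing import List, Tuple

Tree = List[Tuple[int, int]]  # a tree as a list of (head, dependent) arcs

def rerank(candidates: List[Tree], arc_score) -> Tree:
    """Return the candidate tree with the highest total arc score."""
    return max(candidates, key=lambda tree: sum(arc_score(h, d) for h, d in tree))

# Toy usage: two candidate analyses of a 3-token sentence (token 0 = root).
k_best = [
    [(0, 1), (1, 2), (2, 3)],
    [(0, 2), (2, 1), (2, 3)],
]
# Hypothetical arc scores standing in for a learned scoring model.
toy_scores = {(0, 2): 2.0, (2, 1): 1.5, (2, 3): 1.0, (0, 1): 0.5, (1, 2): 0.5}
best = rerank(k_best, lambda h, d: toy_scores.get((h, d), 0.0))
print(best)  # the second candidate wins under these scores
```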
Author:
Manuela Sanguinetti, Cristina Bosco, Lauren Cassidy, Özlem Çetinoğlu, Alessandra Teresa Cignarella, Teresa Lynn, Ines Rehbein, Josef Ruppenhofer, Djamé Seddah, Amir Zeldes
External link:
https://explore.openaire.eu/search/publication?articleId=od_______970::d8c46b6cc315c965e815bab8ed0e3d3f
http://hdl.handle.net/2318/1739633
Author:
Ines Rehbein
Published in:
LAW@ACL
This paper investigates the use of explicitly signalled discourse relations in persuasive texts. We present a corpus study where we control for speaker and topic and show that the distribution of different discourse connectives varies considerably ac…
Author:
Ines Rehbein, Raphael Schumann
Published in:
CoNLL
Active learning (AL) is a technique for reducing manual annotation effort during the annotation of training data for machine learning classifiers. For NLP tasks, pool-based and stream-based sampling techniques have been used to select new instances f…
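For context, a minimal sketch of the pool-based variant mentioned in the abstract, using plain uncertainty sampling with a placeholder classifier and synthetic data; the sampling strategies actually evaluated in the paper may differ.

```python
# Generic pool-based active learning loop with uncertainty sampling (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic binary-classification data standing in for featurised text instances.
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Seed set: a few labelled examples from each class; the rest form the unlabelled pool.
labelled = list(np.where(y == 1)[0][:5]) + list(np.where(y == 0)[0][:5])
pool = [i for i in range(1000) if i not in labelled]

for step in range(5):
    clf = LogisticRegression(max_iter=1000).fit(X[labelled], y[labelled])
    # Uncertainty sampling: pick pool instances whose predicted probability
    # is closest to 0.5, i.e. where the model is least confident.
    probs = clf.predict_proba(X[pool])[:, 1]
    uncertainty = -np.abs(probs - 0.5)
    batch = [pool[i] for i in np.argsort(uncertainty)[-20:]]
    labelled.extend(batch)                     # simulate annotating the selected batch
    pool = [i for i in pool if i not in batch]
    print(f"round {step}: {len(labelled)} labelled instances")
```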
Published in:
Proceedings of the 18th International Workshop on Treebanks and Linguistic Theories (TLT, SyntaxFest 2019).
Author:
Ines Rehbein, Uli Steinbach
Published in:
LaTeCH@NAACL-HLT
This paper presents a modular NLP pipeline for the creation of a parallel literature corpus, followed by annotation transfer from the source to the target language. The test case we use to evaluate our pipeline is the automatic transfer of quote and …