Showing 1 - 10 of 25 for search: '"Fergadiotis, Manos"'
Non-hierarchical sparse attention Transformer-based models, such as Longformer and Big Bird, are popular approaches to working with long documents. There are clear benefits to these approaches compared to the original Transformer in terms of efficiency …
External link:
http://arxiv.org/abs/2210.05529
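The snippet above describes Longformer-style sparse (sliding-window plus global) attention for long documents. A minimal sketch of encoding a long report this way is below, using the publicly available allenai/longformer-base-4096 checkpoint; the toy document and the 4,096-token cap are illustrative assumptions, not the paper's experimental setup.

```python
# A minimal sketch of encoding a long document with Longformer's sparse
# (sliding-window + global) attention via the Hugging Face transformers library.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/longformer-base-4096")
model = AutoModel.from_pretrained("allenai/longformer-base-4096")

long_report = " ".join(["Revenue increased during the period."] * 400)  # toy long text
inputs = tokenizer(long_report, truncation=True, max_length=4096, return_tensors="pt")

# Global attention on the first ([CLS]) token; every other token attends only
# within a local sliding window, which is what keeps the cost near-linear.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

with torch.no_grad():
    outputs = model(**inputs, global_attention_mask=global_attention_mask)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)
```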
Author:
Loukas, Lefteris, Fergadiotis, Manos, Chalkidis, Ilias, Spyropoulou, Eirini, Malakasiotis, Prodromos, Androutsopoulos, Ion, Paliouras, Georgios
Publicly traded companies are required to submit periodic reports with eXtensive Business Reporting Language (XBRL) word-level tags. Manually tagging the reports is tedious and costly. We, therefore, introduce XBRL tagging as a new entity extraction task …
External link:
http://arxiv.org/abs/2203.06482
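The entry above frames XBRL tagging as word-level entity extraction. A hedged sketch of that framing as standard token classification with a generic BERT encoder follows; the label count and the freshly initialised classification head are illustrative assumptions, not the released model from the paper.

```python
# A sketch of XBRL tagging as token classification: one tag per word piece.
# num_labels is illustrative; a real setup would match the XBRL tag inventory.
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained("bert-base-uncased", num_labels=5)

sentence = "Net revenue for the quarter was $ 1.2 billion ."
enc = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**enc).logits        # (1, num_word_pieces, num_labels)
predictions = logits.argmax(dim=-1)[0]  # an (untrained, hence random) tag id per word piece
print(predictions.tolist())
```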
Author:
Loukas, Lefteris, Bougiatiotis, Konstantinos, Fergadiotis, Manos, Mavroeidis, Dimitris, Zavitsanos, Elias
Published in:
In Proceedings of the Third Workshop on Financial Technology and Natural Language Processing (FinNLP 2021)
We present the submission of team DICoE for FinSim-3, the 3rd Shared Task on Learning Semantic Similarities for the Financial Domain. The task provides a set of terms in the financial domain and requires classifying them into the most relevant hypernym …
External link:
http://arxiv.org/abs/2109.14906
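FinSim-3, as described above, is essentially term-to-hypernym classification. A minimal sketch under that reading is below; the terms and labels are made up for illustration and are not the shared-task data or the team's system.

```python
# A toy hypernym classifier over financial terms: character n-gram TF-IDF
# features plus logistic regression, trained on invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

terms = ["convertible bond", "preferred stock", "treasury note", "common share"]
hypernyms = ["Bonds", "Stocks", "Bonds", "Stocks"]

clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
clf.fit(terms, hypernyms)
print(clf.predict(["municipal bond"]))  # likely ['Bonds'] on this toy data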
We release EDGAR-CORPUS, a novel corpus comprising annual reports from all the publicly traded companies in the US spanning a period of more than 25 years. To the best of our knowledge, EDGAR-CORPUS is the largest financial NLP corpus available to date …
External link:
http://arxiv.org/abs/2109.14394
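A hedged sketch of reading EDGAR-CORPUS with the Hugging Face datasets library follows; the dataset identifier "eloukas/edgar-corpus", the "full" configuration name, and the field layout are assumptions about how the released corpus is hosted.

```python
# Stream a few annual reports from EDGAR-CORPUS without downloading everything.
# The dataset id and configuration name are assumptions (see note above).
from datasets import load_dataset

corpus = load_dataset("eloukas/edgar-corpus", "full", split="train", streaming=True)
for i, filing in enumerate(corpus):
    print(sorted(filing.keys()))  # e.g. filing year, company identifier, report sections
    if i == 2:
        break
```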
We introduce MULTI-EURLEX, a new multilingual dataset for topic classification of legal documents. The dataset comprises 65k European Union (EU) laws, officially translated in 23 languages, annotated with multiple labels from the EUROVOC taxonomy. We …
External link:
http://arxiv.org/abs/2109.00904
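A hedged sketch of loading the English portion of MULTI-EURLEX for multi-label topic classification; the dataset identifier "multi_eurlex", the "en" configuration, and the field names are assumptions about the public release.

```python
# Load one English EU law and its EUROVOC labels. Depending on the datasets
# library version, this script-based dataset may need trust_remote_code=True.
from datasets import load_dataset

eurlex = load_dataset("multi_eurlex", "en", split="train")
law = eurlex[0]
print(law["text"][:200])  # the law's text (assumed field name)
print(law["labels"])      # EUROVOC concept ids for this law (assumed field name)
```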
Author:
Chalkidis, Ilias, Fergadiotis, Manos, Tsarapatsanis, Dimitrios, Aletras, Nikolaos, Androutsopoulos, Ion, Malakasiotis, Prodromos
Interpretability or explainability is an emerging research field in NLP. From a user-centric point of view, the goal is to build models that provide proper justification for their decisions, similar to those of humans, by requiring the models to satisfy …
External link:
http://arxiv.org/abs/2103.13084
Author:
Chalkidis, Ilias, Fergadiotis, Manos, Manginas, Nikolaos, Katakalou, Eva, Malakasiotis, Prodromos
Major scandals in corporate history have urged the need for regulatory compliance, where organizations need to ensure that their controls (processes) comply with relevant laws, regulations, and policies. However, keeping track of the constantly changing …
External link:
http://arxiv.org/abs/2101.10726
Published in:
Updated version of the paper presented at the Document Intelligence Workshop (NeurIPS 2019)
We investigate contract element extraction. We show that LSTM-based encoders perform better than dilated CNNs, Transformers, and BERT in this task. We also find that domain-specific WORD2VEC embeddings outperform generic pre-trained GLOVE embeddings.
External link:
http://arxiv.org/abs/2101.04355
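The snippet above reports that domain-specific WORD2VEC embeddings beat generic pre-trained GLOVE embeddings for contract element extraction. A minimal sketch of training such in-domain vectors with gensim is below; the three toy contract sentences are illustrative, not the paper's corpus.

```python
# Train small domain-specific word2vec vectors on tokenised contract text.
from gensim.models import Word2Vec

contract_sentences = [
    ["this", "agreement", "is", "governed", "by", "english", "law"],
    ["the", "termination", "date", "shall", "be", "31", "december", "2025"],
    ["the", "contractor", "shall", "indemnify", "the", "client"],
]

w2v = Word2Vec(sentences=contract_sentences, vector_size=100, window=5,
               min_count=1, workers=2, epochs=20)
print(w2v.wv["termination"].shape)             # (100,)
print(w2v.wv.most_similar("agreement", topn=3))
```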
Author:
Katsafados, Apostolos G., Leledakis, George N., Pyrgiotakis, Emmanouil G., Androutsopoulos, Ion, Fergadiotis, Manos
Published in:
In European Journal of Operational Research, 16 January 2024, 312(2):783-797
Author:
Chalkidis, Ilias, Fergadiotis, Manos, Malakasiotis, Prodromos, Aletras, Nikolaos, Androutsopoulos, Ion
BERT has achieved impressive performance in several NLP tasks. However, there has been limited investigation on its adaptation guidelines in specialised domains. Here we focus on the legal domain, where we explore several approaches for applying BERT …
External link:
http://arxiv.org/abs/2010.02559
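A hedged sketch of probing a domain-adapted legal BERT with a fill-mask pipeline; the checkpoint name "nlpaueb/legal-bert-base-uncased" is assumed to be the publicly released legal-domain model associated with this line of work, and the example sentence is illustrative.

```python
# Fill-mask probing with a legal-domain BERT checkpoint (assumed name above).
from transformers import pipeline

fill = pipeline("fill-mask", model="nlpaueb/legal-bert-base-uncased")
for prediction in fill("The court granted the [MASK] for summary judgment.")[:3]:
    print(prediction["token_str"], round(prediction["score"], 3))
```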