Showing 1 - 10 of 2,983 for search: '"Hanbury, A"'
Author:
Arzt, Varvara, Hanbury, Allan
This paper investigates transparency in the creation of benchmarks and the use of leaderboards for measuring progress in NLP, with a focus on the relation extraction (RE) task. Existing RE benchmarks often suffer from insufficient documentation…
External link:
http://arxiv.org/abs/2411.05224
Author:
Pachinger, Pia, Goldzycher, Janis, Planitzer, Anna Maria, Kusa, Wojciech, Hanbury, Allan, Neidhardt, Julia
Model interpretability in toxicity detection greatly profits from token-level annotations. However, such annotations are currently only available in English. We introduce a dataset annotated for offensive language detection sourced from a news forum…
External link:
http://arxiv.org/abs/2406.08080
Systematic literature reviews (SLRs) play an essential role in summarising, synthesising and validating scientific evidence. In recent years, there has been growing interest in using machine learning techniques to automate the identification of relevant…
External link:
http://arxiv.org/abs/2311.12474
Search methods based on Pretrained Language Models (PLMs) have demonstrated large effectiveness gains over statistical and early neural ranking models. However, fine-tuning PLM-based rankers requires a large amount of annotated training data…
External link:
http://arxiv.org/abs/2309.06131
Keeping up with research and finding related work is still a time-consuming task for academics. Researchers sift through thousands of studies to identify a few relevant ones. Automation techniques can help by increasing the efficiency and effectiveness…
External link:
http://arxiv.org/abs/2309.01684
Clinical trials (CTs) often fail due to inadequate patient recruitment. This paper tackles the challenges of CT retrieval by presenting an approach that addresses the patient-to-trials paradigm. Our approach involves two key components in a pipeline…
External link:
http://arxiv.org/abs/2307.00381
Current methods of evaluating search strategies and automated citation screening for systematic literature reviews typically rely on counting the number of relevant and non-relevant publications. This established practice, however, does not accurately…
External link:
http://arxiv.org/abs/2306.17614
We discuss our experiments for COLIEE Task 1, a court case retrieval competition using cases from the Federal Court of Canada. During experiments on the training data, we observe that passage-level retrieval with rank fusion outperforms document-level…
External link:
http://arxiv.org/abs/2304.08188
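The rank fusion mentioned in the COLIEE abstract above is commonly implemented as reciprocal rank fusion (RRF), which merges several ranked lists by summing damped reciprocal ranks. A minimal sketch (the function name and case IDs are illustrative, not taken from the paper):

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of document IDs into one list.

    rankings: list of lists, each ordered best-first.
    k: damping constant from the standard RRF formulation.
    """
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            # A document gains more score the higher it ranks in each run.
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Toy example: two passage-level runs over the same case collection.
run_a = ["case_12", "case_07", "case_33"]
run_b = ["case_07", "case_33", "case_12"]
fused = reciprocal_rank_fusion([run_a, run_b])
print(fused)  # case_07 ranks highly in both runs, so it is fused to the top
```

Because RRF uses only ranks, not raw scores, it needs no score normalisation across heterogeneous retrieval runs, which is why it is a common fusion baseline.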
Published in:
Nature Communications, Vol 15, Iss 1, Pp 1-12 (2024)
Abstract: Researchers have argued that wealthy nations rely on a large net appropriation of labour and resources from the rest of the world through unequal exchange in international trade and global commodity chains. Here we assess this empirically…
External link:
https://doaj.org/article/d93a13882e744645b8313beada67e39c
Robust test collections are crucial for Information Retrieval research. Recently, there has been growing interest in evaluating retrieval systems for domain-specific retrieval tasks; however, these tasks often lack a reliable test collection with human-annotated…
External link:
http://arxiv.org/abs/2208.06936