Showing 1 - 10 of 241
for search: '"Specia, Lucia"'
Author:
Luo, Haoyan, Specia, Lucia
Transformer-based Large Language Models (LLMs) traditionally rely on final-layer loss for training and final-layer representations for predictions, potentially overlooking the predictive power embedded in intermediate layers. Surprisingly, we find that…
External link:
http://arxiv.org/abs/2410.13077
Pretrained language models have significantly advanced performance across various natural language processing tasks. However, adversarial attacks continue to pose a critical challenge to systems built using these models, as they can be exploited with…
External link:
http://arxiv.org/abs/2407.00248
Author:
Wang, Guorun, Specia, Lucia
Text-to-image models are known to propagate social biases. For example, when prompted to generate images of people in certain professions, these models tend to systematically generate specific genders or ethnicities. In this paper, we show that this…
External link:
http://arxiv.org/abs/2407.11002
Published in:
Computational Linguistics, Vol 46, Iss 1, Pp 135-187 (2020)
Sentence Simplification (SS) aims to modify a sentence in order to make it easier to read and understand. In order to do so, several rewriting transformations can be performed, such as replacement, reordering, and splitting. Executing these transformations…
External link:
https://doaj.org/article/78e8b586e63a403bb8d3fb532c661043
Author:
Fomicheva, Marina, Specia, Lucia
Published in:
Computational Linguistics, Vol 45, Iss 3, Pp 515-558 (2019)
Automatic Machine Translation (MT) evaluation is an active field of research, with a handful of new metrics devised every year. Evaluation metrics are generally benchmarked against manual assessment of translation quality, with performance measured in…
External link:
https://doaj.org/article/ac4bc9aad33d4e7fb18cc6cf6e6474ed
Author:
Luo, Haoyan, Specia, Lucia
Explainability for Large Language Models (LLMs) is a critical yet challenging aspect of natural language processing. As LLMs are increasingly integral to diverse applications, their "black-box" nature sparks significant concerns regarding transparency…
External link:
http://arxiv.org/abs/2401.12874
Neural conditional language generation models achieve the state-of-the-art in Neural Machine Translation (NMT) but are highly dependent on the quality of the parallel training dataset. When trained on low-quality datasets, these models are prone to various…
External link:
http://arxiv.org/abs/2211.09878
Scene Text Recognition (STR) models have achieved high performance in recent years on benchmark datasets where text images are presented with minimal noise. Traditional STR recognition pipelines take a cropped image as sole input and attempt to identify…
External link:
http://arxiv.org/abs/2210.10836
Despite recent progress in video and language representation learning, the weak or sparse correspondence between the two modalities remains a bottleneck in the area. Most video-language models are trained via pair-level loss to predict whether a pair…
External link:
http://arxiv.org/abs/2210.05039