Showing 1 - 10 of 69 for search: '"Melero, Maite"'
This paper studies gender bias in machine translation through the lens of Large Language Models (LLMs). Four widely-used test sets are employed to benchmark various base LLMs, comparing their translation quality and gender bias against state-of-the-art …
External link:
http://arxiv.org/abs/2407.18786
Author:
Gilabert, Javier García, Escolano, Carlos, Savall, Aleix Sant, Fornaciari, Francesca De Luca, Mash, Audrey, Liao, Xixian, Melero, Maite
In recent years, Large Language Models (LLMs) have demonstrated exceptional proficiency across a broad spectrum of Natural Language Processing (NLP) tasks, including Machine Translation. However, previous methods predominantly relied on iterative …
External link:
http://arxiv.org/abs/2406.09140
In this work, we explore idiomatic language processing with Large Language Models (LLMs). We introduce the Idiomatic language Test Suite IdioTS, a new dataset of difficult examples specifically designed by language experts to assess the capabilities …
External link:
http://arxiv.org/abs/2405.10579
Author:
de Gibert, Ona, Kharitonova, Ksenia, Figueras, Blanca Calvo, Armengol-Estapé, Jordi, Melero, Maite
In this work, we introduce sequence-to-sequence language resources for Catalan, a moderately under-resourced language, towards two tasks, namely: Summarization and Machine Translation (MT). We present two new abstractive summarization datasets in the …
External link:
http://arxiv.org/abs/2202.06871
Author:
Rodriguez-Penagos, Carlos, Armentano-Oller, Carme, Villegas, Marta, Melero, Maite, Gonzalez, Aitor, Bonet, Ona de Gibert, Pio, Casimiro Carrino
The Catalan Language Understanding Benchmark (CLUB) encompasses various datasets representative of different NLU tasks that enable accurate evaluations of language models, following the General Language Understanding Evaluation (GLUE) example. It is …
External link:
http://arxiv.org/abs/2112.01894
Generative Pre-trained Transformers (GPTs) have recently been scaled to unprecedented sizes in the history of machine learning. These models, solely trained on the language modeling objective, have been shown to exhibit outstanding few-shot learning …
External link:
http://arxiv.org/abs/2108.13349
Author:
Armengol-Estapé, Jordi, Carrino, Casimiro Pio, Rodriguez-Penagos, Carlos, Bonet, Ona de Gibert, Armentano-Oller, Carme, Gonzalez-Agirre, Aitor, Melero, Maite, Villegas, Marta
Multilingual language models have been a crucial breakthrough as they considerably reduce the need of data for under-resourced languages. Nevertheless, the superiority of language-specific models has already been proven for languages having access to …
External link:
http://arxiv.org/abs/2107.07903
B cell focused transient immune suppression protocol for efficient AAV readministration to the liver
Author:
Rana, Jyoti, Herzog, Roland W., Muñoz-Melero, Maite, Yamada, Kentaro, Kumar, Sandeep R.P., Lam, Anh K., Markusic, David M., Duan, Dongsheng, Terhorst, Cox, Byrne, Barry J., Corti, Manuela, Biswas, Moanaro
Published in:
In Molecular Therapy - Methods & Clinical Development 14 March 2024 32(1)
Author:
Kumar, Sandeep R.P., Biswas, Moanaro, Cao, Di, Arisa, Sreevani, Muñoz-Melero, Maite, Lam, Anh K., Piñeros, Annie R., Kapur, Reuben, Kaisho, Tsuneyasu, Kaufman, Randal J., Xiao, Weidong, Shayakhmetov, Dmitry M., Terhorst, Cox, de Jong, Ype P., Herzog, Roland W.
Published in:
In Molecular Therapy 7 February 2024 32(2):325-339