Showing 1 - 10 of 603 results for search: "REVIRIEGO, PEDRO"
Large Language Models (LLMs) have achieved unprecedented performance on many complex tasks, being able, for example, to answer questions on almost any topic. However, they struggle with other simple tasks, such as counting the occurrences of letters…
External link:
http://arxiv.org/abs/2412.18626
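The letter-counting task mentioned in this abstract is trivial to solve programmatically, which is what makes the contrast with LLM behaviour notable. A minimal Python sketch of that task (the word and letter below are illustrative, not taken from the paper):

# Count how often a letter occurs in a word: the kind of simple
# task the abstract says LLMs struggle with despite its triviality.
def count_letter(word: str, letter: str) -> int:
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # prints 3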
One of the latest trends in generative Artificial Intelligence is tools that generate and analyze content in different modalities, such as text and images, and convert information from one to the other. From a conceptual point of view, it is interesting…
External link:
http://arxiv.org/abs/2409.16297
Author:
Mayor-Rocher, Marina, Melero, Nina, Merino-Gómez, Elena, Grandury, María, Conde, Javier, Reviriego, Pedro
Large Language Models (LLMs) have been profusely evaluated on their ability to answer questions on many topics and their performance on different natural language understanding tasks. Those tests are usually conducted in English, but most LLM users are…
External link:
http://arxiv.org/abs/2409.15334
Author:
Wang, Ziheng, Reviriego, Pedro, Niknia, Farzad, Conde, Javier, Liu, Shanshan, Lombardi, Fabrizio
Published in:
IEEE Internet of Things Journal, 2023, Volume: 11, Issue: 8
The implementation of machine learning in Internet of Things devices poses significant operational challenges due to limited energy and computation resources. In recent years, significant efforts have been made to implement simplified ML models that…
External link:
http://arxiv.org/abs/2408.14528
Author:
Martínez, Gonzalo, Molero, Juan Diego, González, Sandra, Conde, Javier, Brysbaert, Marc, Reviriego, Pedro
This study investigates the potential of large language models (LLMs) to provide accurate estimates of concreteness, valence and arousal for multi-word expressions. Unlike previous artificial intelligence (AI) methods, LLMs can capture the nuanced meaning…
External link:
http://arxiv.org/abs/2408.16012
Author:
Conde, Javier, González, Miguel, Martínez, Gonzalo, Moral, Fernando, Merino-Gómez, Elena, Reviriego, Pedro
Published in:
Presented at the GenAI Evaluation KDD2024: KDD workshop on Evaluation and Trustworthiness of Generative AI Models
Generative Artificial Intelligence image models have achieved outstanding performance in text-to-image generation and other tasks, such as inpainting that completes images with missing fragments. The performance of inpainting can be accurately measured…
External link:
http://arxiv.org/abs/2407.09549
Author:
Plaza, Irene, Melero, Nina, del Pozo, Cristina, Conde, Javier, Reviriego, Pedro, Mayor-Rocher, Marina, Grandury, María
The evaluation of Large Language Models (LLMs) is a key element in their continuous improvement process and many benchmarks have been developed to assess the performance of LLMs in different tasks and topics. As LLMs become adopted worldwide…
External link:
http://arxiv.org/abs/2406.17789
Stable Diffusion is a popular Transformer-based model for image generation from text; it applies an image information creator to the input text and the visual knowledge is added in a step-by-step fashion to create an image that corresponds to the input…
External link:
http://arxiv.org/abs/2404.00352
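The step-by-step process this abstract describes corresponds to the iterative denoising steps exposed by common Stable Diffusion implementations. A minimal sketch using the Hugging Face diffusers library (the checkpoint name, prompt and step count are illustrative assumptions, not details from the paper):

# Text-to-image generation with Stable Diffusion via the diffusers library.
# Assumes a CUDA-capable GPU and the torch/diffusers packages are installed.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# The image is refined over a number of denoising steps,
# the step-by-step addition of visual knowledge the abstract refers to.
image = pipe("a watercolor of a lighthouse at dusk",
             num_inference_steps=30).images[0]
image.save("lighthouse.png")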
The wide adoption of Large Language Models (LLMs) makes their dependability a pressing concern. Detection of errors is the first step to mitigating their impact on a system and thus, efficient error detection for LLMs is an important issue. In many…
External link:
http://arxiv.org/abs/2403.16393
Author:
Conde, Javier, González, Miguel, Melero, Nina, Ferrando, Raquel, Martínez, Gonzalo, Merino-Gómez, Elena, Hernández, José Alberto, Reviriego, Pedro
Published in:
Procesamiento del Lenguaje Natural, n. 73, 2024. http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6603
The growing interest in Large Language Models (LLMs) and in particular in conversational models with which users can interact has led to the development of a large number of open-source chat LLMs. These models are evaluated on a wide range of benchmarks…
External link:
http://arxiv.org/abs/2403.15491