Showing 1 - 10 of 34 for search: '"de Lacalle, Oier Lopez"'
Large Language Models (LLMs) exhibit extensive knowledge about the world, but most evaluations have been limited to global or anglocentric subjects. This raises the question of how well these models perform on topics relevant to other cultures, whose …
External link:
http://arxiv.org/abs/2406.07302
Cross-lingual transfer learning is widely used in Event Extraction for low-resource languages and involves a Multilingual Language Model that is trained in a source language and applied to the target language. This paper studies whether the typological …
External link:
http://arxiv.org/abs/2404.06392
Author:
Salaberria, Ander, Azkune, Gorka, de Lacalle, Oier Lopez, Soroa, Aitor, Agirre, Eneko, Keller, Frank
Existing work has observed that current text-to-image systems do not accurately reflect explicit spatial relations between objects such as 'left of' or 'below'. We hypothesize that this is because explicit spatial relations rarely appear in the image …
External link:
http://arxiv.org/abs/2403.00587
Author:
Sainz, Oscar, Campos, Jon Ander, García-Ferrero, Iker, Etxaniz, Julen, de Lacalle, Oier Lopez, Agirre, Eneko
In this position paper, we argue that the classical evaluation on Natural Language Processing (NLP) tasks using annotated benchmarks is in trouble. The worst kind of data contamination happens when a Large Language Model (LLM) is trained on the test …
External link:
http://arxiv.org/abs/2310.18018
Author:
Sainz, Oscar, García-Ferrero, Iker, Agerri, Rodrigo, de Lacalle, Oier Lopez, Rigau, German, Agirre, Eneko
Large Language Models (LLMs) combined with instruction tuning have made significant progress when generalizing to unseen tasks. However, they have been less successful in Information Extraction (IE), lagging behind task-specific models. Typically, IE …
External link:
http://arxiv.org/abs/2310.03668
Translate-test is a popular technique to improve the performance of multilingual language models. This approach works by translating the input into English using an external machine translation system, and running inference over the translated input.
External link:
http://arxiv.org/abs/2308.01223
Language Models are at the core of almost any Natural Language Processing system nowadays. One of their particular strengths is contextualized representations, a game-changing feature when disambiguation between word senses is necessary. In this paper …
External link:
http://arxiv.org/abs/2302.03353
Recent work has shown that NLP tasks such as Relation Extraction (RE) can be recast as Textual Entailment tasks using verbalizations, with strong performance in zero-shot and few-shot settings thanks to pre-trained entailment models. The fact that …
External link:
http://arxiv.org/abs/2205.01376
The current workflow for Information Extraction (IE) analysts involves the definition of the entities/relations of interest and a training corpus with annotated examples. In this demonstration we introduce a new workflow where the analyst directly ve…
External link:
http://arxiv.org/abs/2203.13602
Published in:
Expert Systems with Applications, Volume 212, 2023, 118669
Integrating outside knowledge for reasoning in visio-linguistic tasks such as visual question answering (VQA) is an open problem. Given that pretrained language models have been shown to include world knowledge, we propose to use a unimodal (text-only) …
External link:
http://arxiv.org/abs/2109.08029