Showing 1 - 10 of 80 results for the search: '"Szpektor, Idan"'
Recent research increasingly focuses on training vision-language models (VLMs) with long, detailed image captions. However, small-scale VLMs often struggle to balance the richness of these captions with the risk of hallucinating content during fine-tuning. …
External link:
http://arxiv.org/abs/2411.09018
Multi-document (MD) processing is crucial for LLMs to handle real-world tasks such as summarization and question answering across large sets of documents. While LLMs have improved at processing long inputs, MD contexts still present challenges, such as …
External link:
http://arxiv.org/abs/2410.23463
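For context on what a multi-document (MD) input involves in practice, here is a minimal Python sketch of assembling an MD question-answering prompt with explicit per-document boundaries; the delimiter format and the helper name build_md_prompt are illustrative assumptions, not the method from the paper above.

    def build_md_prompt(documents: list[str], question: str) -> str:
        """Concatenate documents with explicit boundaries so the
        model can attribute evidence to a specific source."""
        parts = []
        for i, doc in enumerate(documents, start=1):
            parts.append(f"[Document {i}]\n{doc.strip()}\n[End of Document {i}]")
        return "\n\n".join(parts) + f"\n\nQuestion: {question}\nAnswer:"

    # Usage with two toy documents:
    docs = [
        "The Eiffel Tower was completed in 1889.",
        "The tower stands on the Champ de Mars in Paris.",
    ]
    print(build_md_prompt(docs, "When was the Eiffel Tower completed?"))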
Large language models (LLMs) are susceptible to hallucinations: outputs that are ungrounded, factually incorrect, or inconsistent with prior generations. We focus on closed-book Question Answering (CBQA), where previous work has not fully addressed the …
External link:
http://arxiv.org/abs/2410.22071
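One generic signal for the "inconsistent with prior generations" failure mode is to sample several answers to the same closed-book question and measure their agreement. The sketch below is a toy illustration of that idea, not the paper's method; the sampled answers stand in for repeated model calls at nonzero temperature.

    from collections import Counter

    def agreement_score(sampled: list[str]) -> float:
        """Fraction of samples matching the most common answer."""
        normalized = [a.strip().lower() for a in sampled]
        most_common_count = Counter(normalized).most_common(1)[0][1]
        return most_common_count / len(normalized)

    # Toy stand-in for five sampled generations:
    sampled = ["Paris", "Paris", "paris", "Lyon", "Paris"]
    score = agreement_score(sampled)
    print(f"agreement = {score:.2f}")  # 0.80
    if score < 0.5:
        print("low consistency: answer may be hallucinated")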
NLP benchmarks rely on standardized datasets for training and evaluating models and are crucial for advancing the field. Traditionally, expert annotations ensure high-quality labels; however, the cost of expert annotation does not scale well with the …
External link:
http://arxiv.org/abs/2410.18889
Author:
Cattan, Arie, Roit, Paul, Zhang, Shiyue, Wan, David, Aharoni, Roee, Szpektor, Idan, Bansal, Mohit, Dagan, Ido
There has been an increasing interest in detecting hallucinations in model-generated texts, both manually and automatically, at varying levels of granularity. However, most existing methods fail to precisely pinpoint the errors. In this work, we introduce …
External link:
http://arxiv.org/abs/2410.07473
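To make "varying levels of granularity" concrete: a span-level detector scores each output sentence against the source instead of judging the whole text. The sketch below illustrates that general setup; the lexical-overlap is_grounded check is a deliberately crude stand-in for a real entailment model and is not the method this paper introduces.

    import re

    def is_grounded(sentence: str, source: str, threshold: float = 0.5) -> bool:
        """Heuristic: enough content-word overlap with the source."""
        words = set(re.findall(r"\w+", sentence.lower()))
        source_words = set(re.findall(r"\w+", source.lower()))
        return len(words & source_words) / max(len(words), 1) >= threshold

    source = "Marie Curie won the Nobel Prize in Physics in 1903."
    summary = "Marie Curie won the Nobel Prize in 1903. She was born in Vienna."
    # Flag each sentence separately, rather than the text as a whole.
    for sent in re.split(r"(?<=[.!?])\s+", summary):
        flag = "ok " if is_grounded(sent, source) else "BAD"
        print(f"{flag} | {sent}")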
Author:
Orgad, Hadas, Toker, Michael, Gekhman, Zorik, Reichart, Roi, Szpektor, Idan, Kotek, Hadas, Belinkov, Yonatan
Large language models (LLMs) often produce errors, including factual inaccuracies, biases, and reasoning failures, collectively referred to as "hallucinations". Recent studies have demonstrated that LLMs' internal states encode information regarding …
External link:
http://arxiv.org/abs/2410.02707
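The "internal states encode information" finding is typically operationalized with a probing classifier: a small model trained on hidden-state vectors to predict whether the generated answer was correct. The sketch below shows that generic recipe on synthetic stand-in vectors; the shapes, names, and random "activations" are assumptions, not the paper's setup.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    hidden_dim, n_examples = 64, 500

    # Synthetic "activations": correct answers get a small shift
    # along one direction, mimicking a truthfulness signal.
    labels = rng.integers(0, 2, size=n_examples)  # 1 = answer was correct
    direction = rng.normal(size=hidden_dim)
    states = rng.normal(size=(n_examples, hidden_dim)) + np.outer(labels, direction)

    # Train a linear probe on one split, evaluate on the rest.
    split = 400
    probe = LogisticRegression(max_iter=1000)
    probe.fit(states[:split], labels[:split])
    print("probe accuracy:", probe.score(states[split:], labels[split:]))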
Beneath the Surface of Consistency: Exploring Cross-lingual Knowledge Representation Sharing in LLMs
The veracity of a factoid is largely independent of the language it is written in. However, language models are inconsistent in their ability to answer the same factual question across languages. This raises questions about how LLMs represent a given …
External link:
http://arxiv.org/abs/2408.10646
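A minimal way to quantify the cross-lingual inconsistency described here is to ask the same factual question in several languages and compute pairwise answer agreement. In the sketch below, ask is a hypothetical stand-in for a real model call with canned answers; it is not the paper's measurement protocol.

    from itertools import combinations

    def ask(question: str, lang: str) -> str:
        # Hypothetical stand-in for querying a model in a given language.
        canned = {"en": "Paris", "fr": "Paris", "de": "Paris", "tr": "Lyon"}
        return canned[lang]

    def consistency(question: str, langs: list[str]) -> float:
        """Pairwise agreement of normalized answers across languages."""
        answers = {lang: ask(question, lang).strip().lower() for lang in langs}
        pairs = list(combinations(langs, 2))
        agree = sum(answers[a] == answers[b] for a, b in pairs)
        return agree / len(pairs)

    langs = ["en", "fr", "de", "tr"]
    print(f"consistency = {consistency('Capital of France?', langs):.2f}")  # 0.50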
Author:
Bitton-Guetta, Nitzan, Slobodkin, Aviv, Maimon, Aviya, Habba, Eliya, Rassin, Royi, Bitton, Yonatan, Szpektor, Idan, Globerson, Amir, Elovici, Yuval
Imagine observing someone scratching their arm; to understand why, additional context would be necessary. However, spotting a mosquito nearby would immediately offer a likely explanation for the person's discomfort, thereby alleviating the need for further …
External link:
http://arxiv.org/abs/2407.19474
Generated video scenes for action-centric sequence descriptions, such as recipe instructions and do-it-yourself projects, include non-linear patterns, in which the next video may need to be visually consistent not with the immediately preceding video but with …
External link:
http://arxiv.org/abs/2407.11814
The performance of Large Vision Language Models (LVLMs) depends on the size and quality of their training datasets. Existing video instruction tuning datasets lack diversity, as they are derived by prompting large language models with video captions …
External link:
http://arxiv.org/abs/2407.06189