Showing 1 - 10 of 17
for search: '"Slobodkin, Aviv"'
Author:
Roit, Paul; Slobodkin, Aviv; Hirsch, Eran; Cattan, Arie; Klein, Ayal; Pyatkin, Valentina; Dagan, Ido
Detecting semantic arguments of a predicate word has been conventionally modeled as a sentence-level task. The typical reader, however, perfectly interprets predicate-argument relations in a much wider context than just the sentence where the predicate…
External link:
http://arxiv.org/abs/2408.04246
Author:
Bitton-Guetta, Nitzan; Slobodkin, Aviv; Maimon, Aviya; Habba, Eliya; Rassin, Royi; Bitton, Yonatan; Szpektor, Idan; Globerson, Amir; Elovici, Yuval
Imagine observing someone scratching their arm; to understand why, additional context would be necessary. However, spotting a mosquito nearby would immediately offer a likely explanation for the person's discomfort, thereby alleviating the need for further…
External link:
http://arxiv.org/abs/2407.19474
Is It Really Long Context if All You Need Is Retrieval? Towards Genuinely Difficult Long Context NLP
Improvements in language models' capabilities have pushed their applications towards longer contexts, making long-context evaluation and development an active research area. However, many disparate use-cases are grouped together under the umbrella term…
External link:
http://arxiv.org/abs/2407.00402
Author:
Ernst, Ori; Shapira, Ori; Slobodkin, Aviv; Adar, Sharon; Bansal, Mohit; Goldberger, Jacob; Levy, Ran; Dagan, Ido
Multi-document summarization (MDS) is a challenging task, often decomposed to subtasks of salience and redundancy detection, followed by text generation. In this context, alignment of corresponding sentences between a reference summary and its source…
External link:
http://arxiv.org/abs/2406.00842
Recent efforts to address hallucinations in Large Language Models (LLMs) have focused on attributed text generation, which supplements generated texts with citations of supporting sources for post-generation fact-checking and corrections. Yet, these…
External link:
http://arxiv.org/abs/2403.17104
Grounded text generation, encompassing tasks such as long-form question-answering and summarization, necessitates both content selection and content consolidation. Current end-to-end methods are difficult to control and interpret due to their opaqueness…
External link:
http://arxiv.org/abs/2403.15351
Large language models (LLMs) have been shown to possess impressive capabilities, while also raising crucial concerns about the faithfulness of their responses. A primary issue arising in this context is the management of (un)answerable queries by LLMs…
External link:
http://arxiv.org/abs/2310.11877
The recently introduced Controlled Text Reduction (CTR) task isolates the text generation step within typical summarization-style tasks. It does so by challenging models to generate coherent text conforming to pre-selected content within the input text…
External link:
http://arxiv.org/abs/2310.09017
Current approaches for text summarization are predominantly automatic, with rather limited space for human intervention and control over the process. In this paper, we introduce SummHelper, a 2-phase summarization assistant designed to foster human-machine…
External link:
http://arxiv.org/abs/2308.08363
Producing a reduced version of a source text, as in generic or focused summarization, inherently involves two distinct subtasks: deciding on targeted content and generating a coherent text conveying it. While some popular approaches address summarization…
External link:
http://arxiv.org/abs/2210.13449