Showing results 1 - 10 of 1,232 for the search: '"Slobodkin A"'
Text-to-image (T2I) models are remarkable at generating realistic images based on textual descriptions. However, textual prompts are inherently underspecified: they do not specify all possible attributes of the required image. This raises two key questions…
External link:
http://arxiv.org/abs/2410.22592
Authors:
Roit, Paul, Slobodkin, Aviv, Hirsch, Eran, Cattan, Arie, Klein, Ayal, Pyatkin, Valentina, Dagan, Ido
Detecting semantic arguments of a predicate word has conventionally been modeled as a sentence-level task. The typical reader, however, perfectly interprets predicate-argument relations in a much wider context than just the sentence where the predicate appears…
External link:
http://arxiv.org/abs/2408.04246
Authors:
Bitton-Guetta, Nitzan, Slobodkin, Aviv, Maimon, Aviya, Habba, Eliya, Rassin, Royi, Bitton, Yonatan, Szpektor, Idan, Globerson, Amir, Elovici, Yuval
Imagine observing someone scratching their arm; to understand why, additional context would be necessary. However, spotting a mosquito nearby would immediately offer a likely explanation for the person's discomfort, thereby alleviating the need for f…
External link:
http://arxiv.org/abs/2407.19474
Is It Really Long Context if All You Need Is Retrieval? Towards Genuinely Difficult Long Context NLP
Improvements in language models' capabilities have pushed their applications towards longer contexts, making long-context evaluation and development an active research area. However, many disparate use-cases are grouped together under the umbrella term…
External link:
http://arxiv.org/abs/2407.00402
Authors:
Ernst, Ori, Shapira, Ori, Slobodkin, Aviv, Adar, Sharon, Bansal, Mohit, Goldberger, Jacob, Levy, Ran, Dagan, Ido
Multi-document summarization (MDS) is a challenging task, often decomposed into the subtasks of salience and redundancy detection, followed by text generation. In this context, alignment of corresponding sentences between a reference summary and its source…
External link:
http://arxiv.org/abs/2406.00842
Published in:
Phys. Rev. Lett. 133, 173801 (2024)
A Coherent Perfect Absorber (CPA) exploits the interferometric nature of light to deposit all of a light field's incident energy into an otherwise weakly absorbing sample. The downside of this concept is that the necessary destructive interference in…
External link:
http://arxiv.org/abs/2404.04151
Recent efforts to address hallucinations in Large Language Models (LLMs) have focused on attributed text generation, which supplements generated texts with citations of supporting sources for post-generation fact-checking and corrections. Yet, these…
External link:
http://arxiv.org/abs/2403.17104
Grounded text generation, encompassing tasks such as long-form question answering and summarization, necessitates both content selection and content consolidation. Current end-to-end methods are difficult to control and interpret due to their opaqueness…
External link:
http://arxiv.org/abs/2403.15351
Large language models (LLMs) have been shown to possess impressive capabilities, while also raising crucial concerns about the faithfulness of their responses. A primary issue arising in this context is the management of (un)answerable queries by LLMs…
External link:
http://arxiv.org/abs/2310.11877
The recently introduced Controlled Text Reduction (CTR) task isolates the text generation step within typical summarization-style tasks. It does so by challenging models to generate coherent text conforming to pre-selected content within the input text…
External link:
http://arxiv.org/abs/2310.09017