Showing 1 - 10 of 18,214 for search: '"Herzig A"'
We present the architecture of a fully autonomous, bio-inspired cognitive agent built around a spiking neural network (SNN) implementing the agent's semantic memory. The agent explores its universe and learns concepts of objects/situations and of its…
External link:
http://arxiv.org/abs/2411.12308
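The entry above only sketches the agent's architecture; as a point of reference, the following is a minimal leaky integrate-and-fire (LIF) neuron in NumPy, the basic unit SNNs of this kind are built from. All parameter values are illustrative assumptions, not the paper's.

```python
import numpy as np

def lif_simulate(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_reset=0.0, v_threshold=1.0):
    """Simulate one leaky integrate-and-fire neuron.

    input_current: 1-D array of input drive per time step.
    Returns the membrane trace and a boolean spike train.
    Parameter values are illustrative assumptions.
    """
    v = v_rest
    trace, spikes = [], []
    for i_t in input_current:
        # Leaky integration: decay toward rest, plus input drive.
        v += dt / tau * (v_rest - v) + i_t * dt
        fired = v >= v_threshold
        if fired:
            v = v_reset  # reset after a spike
        trace.append(v)
        spikes.append(fired)
    return np.array(trace), np.array(spikes)

# Constant drive produces a regular spike train.
trace, spikes = lif_simulate(np.full(200, 0.08))
print("spike count:", spikes.sum())
```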
Large language models (LLMs) are susceptible to hallucinations: outputs that are ungrounded, factually incorrect, or inconsistent with prior generations. We focus on closed-book Question Answering (CBQA), where previous work has not fully addressed the…
External link:
http://arxiv.org/abs/2410.22071
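The abstract is cut off before the method; as one generic illustration (not this paper's approach), a common hallucination signal in closed-book QA is answer consistency under repeated sampling. The `generate` callable below is a hypothetical stand-in for any sampling-enabled LLM call.

```python
import random
from collections import Counter

def consistency_score(question, generate, n_samples=5):
    """Heuristic hallucination signal for closed-book QA:
    sample several answers and measure their agreement.

    `generate(question)` is a hypothetical stand-in for an
    LLM call with temperature > 0.
    """
    answers = [generate(question).strip().lower() for _ in range(n_samples)]
    top_answer, count = Counter(answers).most_common(1)[0]
    return top_answer, count / n_samples  # low agreement -> likely ungrounded

# Toy stand-in model that answers inconsistently:
def toy_generate(question):
    return random.choice(["Paris", "Paris", "Lyon"])

answer, agreement = consistency_score("Capital of France?", toy_generate)
print(answer, agreement)
```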
Recently, Large Language Models (LLMs) have achieved remarkable success using in-context learning (ICL) in the language domain. However, leveraging the ICL capabilities within LLMs to directly predict robot actions remains largely unexplored. In this…
External link:
http://arxiv.org/abs/2410.12782
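A hedged sketch of what "ICL for robot actions" can look like in practice: serializing (state, action) demonstrations into a textual prompt. The field names and `move(...)` format are assumptions for illustration, not the paper's interface.

```python
def build_icl_prompt(demos, query_state):
    """Serialize (state, action) demonstrations into an in-context
    prompt; the exact textual format is an illustrative assumption.
    """
    lines = ["Predict the next robot action from the state."]
    for state, action in demos:
        lines.append(f"state: {state} -> action: {action}")
    lines.append(f"state: {query_state} -> action:")
    return "\n".join(lines)

demos = [
    ("gripper=(0.10, 0.20, 0.30), object=(0.10, 0.25, 0.05)",
     "move(0.00, 0.05, -0.25)"),
    ("gripper=(0.40, 0.10, 0.20), object=(0.35, 0.10, 0.05)",
     "move(-0.05, 0.00, -0.15)"),
]
print(build_icl_prompt(
    demos, "gripper=(0.20, 0.30, 0.25), object=(0.20, 0.20, 0.05)"))
```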
Published in:
Marquis, Pierre; Papini, Odile; Prade, Henri. A Guided Tour of Artificial Intelligence Research, Vol. 1/3: Knowledge Representation, Reasoning and Learning, Springer International Publishing, pp. 487-518, 2020, ISBN 978-3-030-06163-0
The purpose of this book is to provide an overview of AI research, ranging from basic work to interfaces and applications, with as much emphasis on results as on current issues. It is aimed at an audience of master's students and Ph.D. students, and can…
External link:
http://arxiv.org/abs/2406.18930
Author:
Huang, Brandon, Mitra, Chancharik, Arbelle, Assaf, Karlinsky, Leonid, Darrell, Trevor, Herzig, Roei
The recent success of interleaved Large Multimodal Models (LMMs) in few-shot learning suggests that in-context learning (ICL) with many examples can be promising for learning new tasks. However, this many-shot multimodal ICL setting has one crucial problem…
External link:
http://arxiv.org/abs/2406.15334
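The truncated sentence points at a context-length bottleneck; the arithmetic below shows why many-shot multimodal ICL hits it quickly, under assumed per-image and per-caption token costs (576 patch tokens per image is typical of some ViT-based LMMs, but it is an assumption here).

```python
def max_shots(context_limit, image_tokens=576, text_tokens=60,
              query_tokens=700):
    """How many interleaved (image + caption) examples fit in a
    context window? All token costs are illustrative assumptions.
    """
    per_shot = image_tokens + text_tokens
    return max(0, (context_limit - query_tokens) // per_shot)

for limit in (4_096, 32_768, 128_000):
    print(f"context={limit:>7}: {max_shots(limit)} shots")
```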
Author:
Cattan, Arie, Jacovi, Alon, Fabrikant, Alex, Herzig, Jonathan, Aharoni, Roee, Rashkin, Hannah, Marcus, Dror, Hassidim, Avinatan, Matias, Yossi, Szpektor, Idan, Caciularu, Avi
Despite recent advancements in Large Language Models (LLMs), their performance on tasks involving long contexts remains sub-optimal. In-Context Learning (ICL) with few-shot examples may be an appealing solution to enhance LLM performance in this scenario…
External link:
http://arxiv.org/abs/2406.13632
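As a generic illustration of few-shot ICL in this setting (not necessarily this paper's method), demonstrations are often selected by embedding similarity to the query:

```python
import numpy as np

def select_demos(query_vec, demo_vecs, demos, k=3):
    """Pick the k demonstrations whose embeddings are most similar
    (cosine) to the query; a common selection heuristic, not
    necessarily what this paper proposes.
    """
    q = query_vec / np.linalg.norm(query_vec)
    d = demo_vecs / np.linalg.norm(demo_vecs, axis=1, keepdims=True)
    top = np.argsort(d @ q)[::-1][:k]
    return [demos[i] for i in top]

rng = np.random.default_rng(0)
demos = [f"demo_{i}" for i in range(10)]
chosen = select_demos(rng.normal(size=16), rng.normal(size=(10, 16)), demos)
print(chosen)
```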
Recently, Large Language Models (LLMs) have attained impressive performance on math and reasoning benchmarks. However, they still often struggle with logic problems and puzzles that are relatively easy for humans. To further investigate this, we introduce…
External link:
http://arxiv.org/abs/2406.12172
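The benchmark itself is not reproduced here, but puzzles that are "easy for humans" are often verifiable by exhaustive search, which gives ground truth for scoring LLM answers. A knights-and-knaves example:

```python
from itertools import product

# A small knights-and-knaves puzzle solved by exhaustive search:
# A says "B is a knave"; B says "A and I are the same kind."
# Knights always tell the truth; knaves always lie, so each
# statement is true exactly when its speaker is a knight.
for a_knight, b_knight in product([True, False], repeat=2):
    a_claim = not b_knight              # "B is a knave"
    b_claim = (a_knight == b_knight)    # "we are the same kind"
    if (a_claim == a_knight) and (b_claim == b_knight):
        print(f"A is a {'knight' if a_knight else 'knave'}, "
              f"B is a {'knight' if b_knight else 'knave'}")
```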
Author:
Niu, Dantong, Sharma, Yuvan, Biamby, Giscard, Quenum, Jerome, Bai, Yutong, Shi, Baifeng, Darrell, Trevor, Herzig, Roei
In recent years, instruction-tuned Large Multimodal Models (LMMs) have been successful at several tasks, including image captioning and visual question answering; yet leveraging these models remains an open question for robotics. Prior LMMs for robot…
External link:
http://arxiv.org/abs/2406.11815
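One recurring practical step when driving robots with instruction-tuned LMMs is parsing free-text model output into a numeric action. The `move(dx, dy, dz)` convention below is an assumed format for illustration, not one defined by this paper.

```python
import re

ACTION_RE = re.compile(
    r"move\(\s*(-?\d+\.?\d*)\s*,\s*(-?\d+\.?\d*)\s*,\s*(-?\d+\.?\d*)\s*\)"
)

def parse_action(model_output):
    """Extract a 3-DoF end-effector delta from model text.
    The move(dx, dy, dz) format is an assumed convention.
    """
    m = ACTION_RE.search(model_output)
    if m is None:
        raise ValueError(f"no action found in: {model_output!r}")
    return tuple(float(g) for g in m.groups())

print(parse_action("Sure - the arm should move(0.05, -0.10, 0.00) next."))
```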
Author:
Huang, Irene, Lin, Wei, Mirza, M. Jehanzeb, Hansen, Jacob A., Doveh, Sivan, Butoi, Victor Ion, Herzig, Roei, Arbelle, Assaf, Kuehne, Hilde, Darrell, Trevor, Gan, Chuang, Oliva, Aude, Feris, Rogerio, Karlinsky, Leonid
Compositional Reasoning (CR) entails grasping the significance of attributes, relations, and word order. Recent Vision-Language Models (VLMs), comprising a visual encoder and a Large Language Model (LLM) decoder, have demonstrated remarkable proficiency…
External link:
http://arxiv.org/abs/2406.08164
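A common way to probe compositional reasoning in VLMs is with hard negatives that reuse the same words in a different arrangement; a model relying on bag-of-words matching cannot tell them apart. A minimal sketch (the `score` similarity function referenced in the comment is hypothetical):

```python
def swap_attributes(caption, a, b):
    """Build a hard negative by swapping two attribute words,
    e.g. 'a red cube on a blue ball' -> 'a blue cube on a red ball'.
    A compositional model should score the original higher.
    """
    tokens = caption.split()
    swapped = [b if t == a else a if t == b else t for t in tokens]
    return " ".join(swapped)

original = "a red cube on a blue ball"
negative = swap_attributes(original, "red", "blue")
print(original, "|", negative)

# Evaluation idea (score = any image-text similarity function):
#   correct = score(image, original) > score(image, negative)
```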
Author:
Caciularu, Avi, Jacovi, Alon, Ben-David, Eyal, Goldshtein, Sasha, Schuster, Tal, Herzig, Jonathan, Elidan, Gal, Globerson, Amir
Large Language Models (LLMs) often do not perform well on queries that require the aggregation of information across texts. To better evaluate this setting and facilitate modeling efforts, we introduce TACT - Text And Calculations through Tables, a dataset…
External link:
http://arxiv.org/abs/2406.03618
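The dataset is not reproduced here, but the kind of query it targets - aggregating numbers scattered across texts via a table - looks like this when executed directly with pandas (the data below is invented for illustration):

```python
import pandas as pd

# Facts scattered across "texts", normalized into one table.
table = pd.DataFrame(
    {
        "city": ["Lyon", "Lyon", "Nice", "Nice"],
        "year": [2022, 2023, 2022, 2023],
        "visitors": [120_000, 150_000, 90_000, 110_000],
    }
)

# Query: "Which city grew more between 2022 and 2023?"
growth = (
    table.pivot(index="city", columns="year", values="visitors")
    .assign(delta=lambda t: t[2023] - t[2022])
)
print(growth)
print("largest growth:", growth["delta"].idxmax())
```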