Showing 1 - 10
of 2,749
for search: '"Bendersky A"'
Author:
Tu, Jianhong, Ni, Zhuohao, Crispino, Nicholas, Yu, Zihao, Bendersky, Michael, Gunel, Beliz, Jia, Ruoxi, Liu, Xin, Lyu, Lingjuan, Song, Dawn, Wang, Chenguang
We present a novel instruction tuning recipe to improve the zero-shot task generalization of multimodal large language models. In contrast to existing instruction tuning mechanisms that heavily rely on visual instructions, our approach focuses on lan…
External link:
http://arxiv.org/abs/2411.10557
Author:
Feldman, Virginia, Bendersky, Ariel
Quantum process tomography is a useful tool for characterizing quantum processes. This task is essential for the development of different areas, such as quantum information processing. We present a protocol for selective continuous-variable quantum p…
External link:
http://arxiv.org/abs/2410.17516
Author:
Liang, Yi, Wu, You, Zhuang, Honglei, Chen, Li, Shen, Jiaming, Jia, Yiling, Qin, Zhen, Sanghai, Sumit, Wang, Xuanhui, Yang, Carl, Bendersky, Michael
Generating high-quality, in-depth textual documents, such as academic papers, news articles, Wikipedia entries, and books, remains a significant challenge for Large Language Models (LLMs). In this paper, we propose to use planning to generate long fo…
External link:
http://arxiv.org/abs/2410.06203
Author:
Yue, Zhenrui, Zhuang, Honglei, Bai, Aijun, Hui, Kai, Jagerman, Rolf, Zeng, Hansi, Qin, Zhen, Wang, Dong, Wang, Xuanhui, Bendersky, Michael
The scaling of inference computation has unlocked the potential of long-context large language models (LLMs) across diverse settings. For knowledge-intensive tasks, the increased compute is often allocated to incorporate more external knowledge. Howe…
External link:
http://arxiv.org/abs/2410.04343
Author:
Feldman, Virginia, Bendersky, Ariel
Published in:
Phys. Rev. A 105, 032453 (2022)
We present a protocol that allows the estimation of any density matrix element for continuous-variable quantum states, without resorting to the complete reconstruction of the full density matrix. The algorithm adaptively discretizes the state and t…
External link:
http://arxiv.org/abs/2409.16242
Retrieval Augmented Generation (RAG) has been a powerful tool for Large Language Models (LLMs) to efficiently process overly lengthy contexts. However, recent LLMs like Gemini-1.5 and GPT-4 show exceptional capabilities to understand long contexts di…
External link:
http://arxiv.org/abs/2407.16833
Author:
Shen, Jiaming, Xu, Ran, Jun, Yennie, Qin, Zhen, Liu, Tianqi, Yang, Carl, Liang, Yi, Baumgartner, Simon, Bendersky, Michael
Reward models (RMs) are crucial for aligning large language models (LLMs) with human preferences. They are trained using preference datasets where each example consists of one input prompt, two responses, and a preference label. As curating a high-qu…
External link:
http://arxiv.org/abs/2407.16008
Author:
Shen, Jiaming, Liu, Tianqi, Liu, Jialu, Qin, Zhen, Pavagadhi, Jay, Baumgartner, Simon, Bendersky, Michael
The popularity of automated news headline generation has surged with advancements in pre-trained language models. However, these models often suffer from the "hallucination" problem, where the generated headline is not fully supported by its source…
External link:
http://arxiv.org/abs/2407.15975
Knowledge-intensive visual question answering requires models to effectively use external knowledge to help answer visual questions. A typical pipeline includes a knowledge retriever and an answer generator. However, a retriever that utilizes local i…
External link:
http://arxiv.org/abs/2407.12277
The traditional evaluation of information retrieval (IR) systems is generally very costly as it requires manual relevance annotation from human experts. Recent advancements in generative artificial intelligence, specifically large language models…
External link:
http://arxiv.org/abs/2407.02464