Showing 1 - 10 of 21 results for the search: '"Salemi, Alireza"'

Author: Salemi, Alireza, Zamani, Hamed
This paper investigates the design of a unified search engine to serve multiple retrieval-augmented generation (RAG) agents, each with a distinct task, backbone large language model (LLM), and retrieval-augmentation strategy. We introduce an iterative …
External link: http://arxiv.org/abs/2410.09942

Author: Salemi, Alireza, Zamani, Hamed
Privacy-preserving methods for personalizing large language models (LLMs) are relatively under-explored. There are two schools of thought on this topic: (1) generating personalized outputs by personalizing the input prompt through retrieval augmentation …
External link: http://arxiv.org/abs/2409.09510

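As a rough illustration of the first approach named in the abstract above (personalizing the input prompt through retrieval augmentation), the sketch below retrieves entries from a user's own profile and prepends them to the prompt. The `ProfileEntry` class, the toy overlap scorer, and the `personalize_prompt` helper are hypothetical names invented for this sketch, not the authors' implementation.

```python
# Minimal, hypothetical sketch of retrieval-augmented prompt personalization:
# retrieve the most relevant entries from a user's own profile and prepend
# them to the prompt, so no user data is baked into model weights.
from dataclasses import dataclass


@dataclass
class ProfileEntry:
    text: str


def score(query: str, entry: ProfileEntry) -> float:
    """Toy lexical-overlap score; a real system would use a trained retriever."""
    q_terms = set(query.lower().split())
    e_terms = set(entry.text.lower().split())
    return len(q_terms & e_terms) / (len(q_terms) + 1e-9)


def personalize_prompt(query: str, profile: list[ProfileEntry], k: int = 2) -> str:
    """Build a personalized prompt from the top-k profile entries."""
    top = sorted(profile, key=lambda e: score(query, e), reverse=True)[:k]
    context = "\n".join(f"- {e.text}" for e in top)
    return f"User history:\n{context}\n\nTask: {query}"


if __name__ == "__main__":
    profile = [
        ProfileEntry("Reviewed a sci-fi novel, rated it 5 stars."),
        ProfileEntry("Wrote a short email declining a meeting."),
        ProfileEntry("Reviewed a cookbook, rated it 2 stars."),
    ]
    print(personalize_prompt("Write a review for a new fantasy novel.", profile))
```

The point of the sketch is only that, in this school of thought, personalization lives entirely in the prompt, so the base model itself is never updated with user data.
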
In the field of language modeling, models augmented with retrieval components have emerged as a promising solution to address several challenges faced in the natural language processing (NLP) field, including knowledge grounding, interpretability, and …
External link: http://arxiv.org/abs/2407.12982

Author: Kumar, Ishita, Viswanathan, Snigdha, Yerra, Sushrita, Salemi, Alireza, Rossi, Ryan A., Dernoncourt, Franck, Deilamsalehy, Hanieh, Chen, Xiang, Zhang, Ruiyi, Agarwal, Shubham, Lipka, Nedim, Van Nguyen, Chien, Nguyen, Thien Huu, Zamani, Hamed
Long-text generation is seemingly ubiquitous in real-world applications of large language models, such as generating an email or writing a review. Despite the fundamental importance and prevalence of long-text generation in many practical applications …
External link: http://arxiv.org/abs/2407.11016

Author: Salemi, Alireza, Zamani, Hamed
This paper introduces uRAG--a framework with a unified retrieval engine that serves multiple downstream retrieval-augmented generation (RAG) systems. Each RAG system consumes the retrieval results for a unique purpose, such as open-domain question answering …
External link: http://arxiv.org/abs/2405.00175

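The general idea of one retrieval engine serving several downstream RAG systems can be sketched as a shared retriever with multiple consumers, as below. The `UnifiedRetriever` class, the toy overlap ranking, and the example agents are assumptions made for illustration only and are not taken from the uRAG paper.

```python
# Hypothetical sketch of a unified retrieval engine shared by several RAG
# consumers (e.g., question answering, fact verification), each of which
# formats the same retrieval results for its own downstream LLM prompt.
from typing import Callable


class UnifiedRetriever:
    """One retrieval engine; many downstream RAG systems consume its results."""

    def __init__(self, corpus: list[str]) -> None:
        self.corpus = corpus

    def search(self, query: str, k: int = 3) -> list[str]:
        # Toy term-overlap ranking stands in for a real learned retriever.
        def overlap(doc: str) -> int:
            return len(set(query.lower().split()) & set(doc.lower().split()))

        return sorted(self.corpus, key=overlap, reverse=True)[:k]


def qa_agent(retriever: UnifiedRetriever) -> Callable[[str], str]:
    def run(question: str) -> str:
        docs = retriever.search(question)
        return "Answer the question using:\n" + "\n".join(docs) + f"\nQ: {question}"
    return run


def fact_check_agent(retriever: UnifiedRetriever) -> Callable[[str], str]:
    def run(claim: str) -> str:
        docs = retriever.search(claim, k=2)
        return "Verify the claim against:\n" + "\n".join(docs) + f"\nClaim: {claim}"
    return run


if __name__ == "__main__":
    engine = UnifiedRetriever(
        ["Paris is the capital of France.", "The Nile is a river in Africa."]
    )
    print(qa_agent(engine)("What is the capital of France?"))
    print(fact_check_agent(engine)("The Nile is in Europe."))
```
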
Author: Salemi, Alireza, Zamani, Hamed
Evaluating retrieval-augmented generation (RAG) presents challenges, particularly for retrieval models within these systems. Traditional end-to-end evaluation methods are computationally expensive. Furthermore, evaluation of the retrieval model's performance …
External link: http://arxiv.org/abs/2404.13781

This paper studies retrieval-augmented approaches for personalizing large language models (LLMs), which potentially have a substantial impact on various applications and domains. We propose the first attempt to optimize the retrieval models that deliver …
External link: http://arxiv.org/abs/2404.05970

This paper studies a category of visual question answering tasks, in which accessing external knowledge is necessary for answering the questions. This category is called outside-knowledge visual question answering (OK-VQA). A major step in developing …
External link: http://arxiv.org/abs/2306.16478

Knowledge-Intensive Visual Question Answering (KI-VQA) refers to answering a question about an image whose answer does not lie in the image. This paper presents a new pipeline for KI-VQA tasks, consisting of a retriever and a reader. First, we introduce …
External link: http://arxiv.org/abs/2304.13649

This paper highlights the importance of personalization in large language models and introduces the LaMP benchmark -- a novel benchmark for training and evaluating language models for producing personalized outputs. LaMP offers a comprehensive evaluation …
External link: http://arxiv.org/abs/2304.11406