Showing 1 - 10 of 269 for search: '"ZAMANI, HAMED"'
In the fast-evolving field of information retrieval (IR), the integration of generative AI technologies such as large language models (LLMs) is transforming how users search for and interact with information. Recognizing this paradigm shift at the in…
External link:
http://arxiv.org/abs/2412.02043
Author:
Mysore, Sheshera, Dhanania, Garima, Patil, Kishor, Kallumadi, Surya, McCallum, Andrew, Zamani, Hamed
Personalized search represents a problem where retrieval models condition on historical user interaction data in order to improve retrieval results. However, personalization is commonly perceived as opaque and not amenable to control by users. Further… [a minimal illustrative sketch of history-conditioned retrieval follows this entry's link]
External link:
http://arxiv.org/abs/2411.02790
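As a rough illustration of the idea named in the snippet above (retrieval conditioned on a user's interaction history), the sketch below blends a query embedding with a simple mean-of-history "profile" before scoring candidates. The toy embedding, the blending weight, and all names are assumptions for illustration, not the paper's method.

```python
# Minimal sketch: personalize retrieval by mixing the query representation
# with an embedding of the user's interaction history before scoring.
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy hash-seeded embedding; a real system would use a trained encoder."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def personalized_scores(query, history, candidates, alpha=0.7):
    """Score candidates against a blend of the query and the user's history."""
    q = embed(query)
    if history:  # mean of past interactions acts as a crude user profile
        profile = np.mean([embed(h) for h in history], axis=0)
        q = alpha * q + (1 - alpha) * profile
        q = q / np.linalg.norm(q)
    return {doc: float(embed(doc) @ q) for doc in candidates}

print(personalized_scores(
    query="transformers",
    history=["attention is all you need", "BERT pretraining"],
    candidates=["electrical transformer maintenance",
                "transformer language models"],
))
```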
Author:
Zhang, Zhehao, Rossi, Ryan A., Kveton, Branislav, Shao, Yijia, Yang, Diyi, Zamani, Hamed, Dernoncourt, Franck, Barrow, Joe, Yu, Tong, Kim, Sungchul, Zhang, Ruiyi, Gu, Jiuxiang, Derr, Tyler, Chen, Hongjie, Wu, Junda, Chen, Xiang, Wang, Zichao, Mitra, Subrata, Lipka, Nedim, Ahmed, Nesreen, Wang, Yu
Personalization of Large Language Models (LLMs) has recently become increasingly important with a wide range of applications. Despite the importance and recent progress, most existing works on personalized LLMs have focused either entirely on (a) per…
External link:
http://arxiv.org/abs/2411.00027
Author:
Salemi, Alireza, Zamani, Hamed
This paper investigates the design of a unified search engine to serve multiple retrieval-augmented generation (RAG) agents, each with a distinct task, backbone large language model (LLM), and retrieval-augmentation strategy. We introduce an iterative… [a minimal sketch of one shared retrieval service follows this entry's link]
External link:
http://arxiv.org/abs/2410.09942
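To make the "one search engine, many RAG agents" setup concrete, here is a small sketch of a single retrieval service that several agents call with their own declared strategy. The corpus, strategy names, and routing logic are illustrative assumptions, not the paper's design.

```python
# Minimal sketch: one shared retrieval entry point serving multiple RAG agents,
# each requesting its own retrieval strategy.
from dataclasses import dataclass
from typing import Callable

CORPUS = {
    "d1": "retrieval-augmented generation grounds LLM outputs in documents",
    "d2": "dense retrieval encodes queries and documents into vectors",
    "d3": "sparse retrieval matches on overlapping query terms",
}

def sparse_search(query: str, k: int) -> list[str]:
    """Rank documents by naive term overlap with the query."""
    q = set(query.lower().split())
    return sorted(CORPUS, key=lambda d: -len(q & set(CORPUS[d].split())))[:k]

def recency_search(query: str, k: int) -> list[str]:
    """Placeholder second strategy; simply returns the first k document ids."""
    return list(CORPUS)[:k]

@dataclass
class UnifiedSearchEngine:
    strategies: dict[str, Callable[[str, int], list[str]]]

    def search(self, agent: str, query: str, strategy: str, k: int = 2) -> list[str]:
        """Single entry point for all agents; routes by the requested strategy."""
        print(f"[engine] agent={agent} strategy={strategy}")
        return self.strategies[strategy](query, k)

engine = UnifiedSearchEngine({"sparse": sparse_search, "recency": recency_search})
print(engine.search(agent="qa_agent", query="dense retrieval vectors", strategy="sparse"))
print(engine.search(agent="summarizer", query="anything", strategy="recency"))
```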
Author:
Atmakuru, Anirudh, Nainani, Jatin, Bheemreddy, Rohith Siddhartha Reddy, Lakkaraju, Anirudh, Yao, Zonghai, Zamani, Hamed, Chang, Haw-Shiuan
Evaluating the creativity of large language models (LLMs) in story writing is difficult because LLM-generated stories could seemingly look creative but be very similar to some existing stories in their huge and proprietary training corpus. To overcome…
External link:
http://arxiv.org/abs/2410.04197
Author:
Salemi, Alireza, Zamani, Hamed
Privacy-preserving methods for personalizing large language models (LLMs) are relatively under-explored. There are two schools of thought on this topic: (1) generating personalized outputs by personalizing the input prompt through retrieval augmentation… [a minimal sketch of this prompt-side approach follows this entry's link]
External link:
http://arxiv.org/abs/2409.09510
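School of thought (1) in the snippet above can be pictured as follows: select a few relevant items from the user's own profile and place only those in the prompt, so the full profile never has to leave the user's side. The selection rule, profile contents, and prompt template below are illustrative assumptions, not the paper's protocol.

```python
# Minimal sketch: personalize the input prompt by retrieving a few relevant
# entries from the user's local profile and injecting them into the prompt.
def retrieve_from_profile(request: str, profile: list[str], k: int = 2) -> list[str]:
    """Rank profile entries by naive term overlap with the request."""
    q = set(request.lower().split())
    return sorted(profile, key=lambda e: -len(q & set(e.lower().split())))[:k]

def build_personalized_prompt(request: str, profile: list[str]) -> str:
    """Only the selected snippets would be sent along with the request."""
    context = "\n".join(f"- {e}" for e in retrieve_from_profile(request, profile))
    return f"User context:\n{context}\n\nTask: {request}"

profile = [
    "prefers vegetarian recipes",
    "allergic to peanuts",
    "writes film reviews on weekends",
]
print(build_personalized_prompt("suggest a quick dinner recipe", profile))
```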
In the field of language modeling, models augmented with retrieval components have emerged as a promising solution to address several challenges faced in the natural language processing (NLP) field, including knowledge grounding, interpretability, and…
External link:
http://arxiv.org/abs/2407.12982
Knowledge-intensive visual question answering requires models to effectively use external knowledge to help answer visual questions. A typical pipeline includes a knowledge retriever and an answer generator. However, a retriever that utilizes local i… [a minimal sketch of this two-stage pipeline follows this entry's link]
External link:
http://arxiv.org/abs/2407.12277
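The "knowledge retriever plus answer generator" pipeline named in the snippet above can be sketched minimally as below. The tiny knowledge base, the overlap-based retriever, and the template "generator" are stand-ins for illustration only, not the paper's models.

```python
# Minimal sketch: knowledge-intensive VQA as retrieve-then-generate.
KNOWLEDGE = [
    "The Eiffel Tower is located in Paris and was completed in 1889.",
    "The Statue of Liberty was a gift from France to the United States.",
]

def retrieve(question: str, image_tags: list[str], k: int = 1) -> list[str]:
    """Score knowledge snippets against the question plus detected image tags."""
    query_terms = set(question.lower().split()) | {t.lower() for t in image_tags}
    return sorted(KNOWLEDGE,
                  key=lambda s: -len(query_terms & set(s.lower().split())))[:k]

def generate_answer(question: str, evidence: list[str]) -> str:
    """Stand-in for an answer generator conditioned on retrieved evidence."""
    return f"Q: {question}\nEvidence: {' '.join(evidence)}\nA: (generated from evidence)"

tags = ["tower", "paris"]            # e.g. output of an object detector
evidence = retrieve("When was this landmark built?", tags)
print(generate_answer("When was this landmark built?", evidence))
```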
At its core, information access and seeking is an interactive process. In existing search engines, interactions are limited to a few pre-defined actions, such as "requery", "click on a document", "scrolling up/down", "going to the next result page",…
External link:
http://arxiv.org/abs/2407.11605
Author:
Dhanania, Garima, Mysore, Sheshera, Pham, Chau Minh, Iyyer, Mohit, Zamani, Hamed, McCallum, Andrew
Topic models are widely used to analyze document collections. While they are valuable for discovering latent topics in a corpus when analysts are unfamiliar with the corpus, analysts also commonly start with an understanding of the content present in… [a minimal topic-modeling sketch follows this entry's link]
External link:
http://arxiv.org/abs/2406.19928
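For context on the standard workflow the snippet above refers to, here is a short sketch that fits plain LDA on a toy corpus and prints the top words per topic. This is ordinary scikit-learn topic modeling, not the paper's proposed approach.

```python
# Minimal sketch: fit LDA on a small corpus and inspect the discovered topics.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "neural retrieval models rank documents for a query",
    "dense retrieval encodes queries and documents as vectors",
    "soccer players scored two goals in the match",
    "the match ended after extra time with a late goal",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

terms = vectorizer.get_feature_names_out()
for topic_id, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:4]]
    print(f"topic {topic_id}: {', '.join(top)}")
```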