Showing 1 - 10 of 78 for search: '"Zhuang, Honglei"'
Author:
Liang, Yi, Wu, You, Zhuang, Honglei, Chen, Li, Shen, Jiaming, Jia, Yiling, Qin, Zhen, Sanghai, Sumit, Wang, Xuanhui, Yang, Carl, Bendersky, Michael
Generating high-quality, in-depth textual documents, such as academic papers, news articles, Wikipedia entries, and books, remains a significant challenge for Large Language Models (LLMs). In this paper, we propose to use planning to generate long-form …
External link:
http://arxiv.org/abs/2410.06203
Author:
Yue, Zhenrui, Zhuang, Honglei, Bai, Aijun, Hui, Kai, Jagerman, Rolf, Zeng, Hansi, Qin, Zhen, Wang, Dong, Wang, Xuanhui, Bendersky, Michael
The scaling of inference computation has unlocked the potential of long-context large language models (LLMs) across diverse settings. For knowledge-intensive tasks, the increased compute is often allocated to incorporate more external knowledge. However, …
External link:
http://arxiv.org/abs/2410.04343
Knowledge-intensive visual question answering requires models to effectively use external knowledge to help answer visual questions. A typical pipeline includes a knowledge retriever and an answer generator. However, a retriever that utilizes local …
External link:
http://arxiv.org/abs/2407.12277
The most recent pointwise Large Language Model (LLM) rankers have achieved remarkable ranking results. However, these rankers are hindered by two major drawbacks: (1) they fail to follow standardized comparison guidance during the ranking process, …
External link:
http://arxiv.org/abs/2404.11960
Author:
Yan, Le, Qin, Zhen, Zhuang, Honglei, Jagerman, Rolf, Wang, Xuanhui, Bendersky, Michael, Oosterhuis, Harrie
The powerful generative abilities of large language models (LLMs) show potential in generating relevance labels for search applications. Previous work has found that directly asking about relevancy, such as "How relevant is document A to query Q?", …
External link:
http://arxiv.org/abs/2404.11791
Author:
Li, Minghan, Zhuang, Honglei, Hui, Kai, Qin, Zhen, Lin, Jimmy, Jagerman, Rolf, Wang, Xuanhui, Bendersky, Michael
Query expansion has been widely used to improve the search results of first-stage retrievers, yet its influence on second-stage, cross-encoder rankers remains under-explored. A recent work of Weller et al. [44] shows that current expansion techniques …
External link:
http://arxiv.org/abs/2311.09175
Author:
Drozdov, Andrew, Zhuang, Honglei, Dai, Zhuyun, Qin, Zhen, Rahimi, Razieh, Wang, Xuanhui, Alon, Dana, Iyyer, Mohit, McCallum, Andrew, Metzler, Donald, Hui, Kai
Recent studies show that large language models (LLMs) can be instructed to effectively perform zero-shot passage re-ranking, in which the results of a first stage retrieval method, such as BM25, are rated and reordered to improve relevance. In this work, …
External link:
http://arxiv.org/abs/2310.14408
Zero-shot text rankers powered by recent LLMs achieve remarkable ranking performance by simply prompting. Existing prompts for pointwise LLM rankers mostly ask the model to choose from binary relevance labels like "Yes" and "No". However, the lack of …
External link:
http://arxiv.org/abs/2310.14122
We propose a novel zero-shot document ranking approach based on Large Language Models (LLMs): the Setwise prompting approach. Our approach complements existing prompting approaches for LLM-based zero-shot ranking: Pointwise, Pairwise, and Listwise. …
External link:
http://arxiv.org/abs/2310.09497
Author:
Qin, Zhen, Jagerman, Rolf, Hui, Kai, Zhuang, Honglei, Wu, Junru, Yan, Le, Shen, Jiaming, Liu, Tianqi, Liu, Jialu, Metzler, Donald, Wang, Xuanhui, Bendersky, Michael
Ranking documents using Large Language Models (LLMs) by directly feeding the query and candidate documents into the prompt is an interesting and practical problem. However, researchers have found it difficult to outperform fine-tuned baseline rankers …
External link:
http://arxiv.org/abs/2306.17563