Showing 1 - 10 of 131 for search: '"Wang, Xuanhui"'
Author:
Liang, Yi, Wu, You, Zhuang, Honglei, Chen, Li, Shen, Jiaming, Jia, Yiling, Qin, Zhen, Sanghai, Sumit, Wang, Xuanhui, Yang, Carl, Bendersky, Michael
Generating high-quality, in-depth textual documents, such as academic papers, news articles, Wikipedia entries, and books, remains a significant challenge for Large Language Models (LLMs). In this paper, we propose to use planning to generate long …
External link:
http://arxiv.org/abs/2410.06203
Author:
Yue, Zhenrui, Zhuang, Honglei, Bai, Aijun, Hui, Kai, Jagerman, Rolf, Zeng, Hansi, Qin, Zhen, Wang, Dong, Wang, Xuanhui, Bendersky, Michael
The scaling of inference computation has unlocked the potential of long-context large language models (LLMs) across diverse settings. For knowledge-intensive tasks, the increased compute is often allocated to incorporate more external knowledge. …
External link:
http://arxiv.org/abs/2410.04343
We introduce LAMPO, a novel paradigm that leverages Large Language Models (LLMs) for solving few-shot multi-class ordinal classification tasks. Unlike conventional methods, which concatenate all demonstration examples with the test instance and …
External link:
http://arxiv.org/abs/2408.03359
The traditional evaluation of information retrieval (IR) systems is generally very costly as it requires manual relevance annotation from human experts. Recent advancements in generative artificial intelligence -- specifically large language models …
External link:
http://arxiv.org/abs/2407.02464
Author:
Yan, Le, Qin, Zhen, Zhuang, Honglei, Jagerman, Rolf, Wang, Xuanhui, Bendersky, Michael, Oosterhuis, Harrie
The powerful generative abilities of large language models (LLMs) show potential in generating relevance labels for search applications. Previous work has found that directly asking about relevancy, such as "How relevant is document A to query Q?", …
External link:
http://arxiv.org/abs/2404.11791
Author:
Liu, Tianqi, Qin, Zhen, Wu, Junru, Shen, Jiaming, Khalman, Misha, Joshi, Rishabh, Zhao, Yao, Saleh, Mohammad, Baumgartner, Simon, Liu, Jialu, Liu, Peter J., Wang, Xuanhui
Aligning language models (LMs) with curated human feedback is critical to control their behaviors in real-world applications. Several recent policy optimization methods, such as DPO and SLiC, serve as promising alternatives to the traditional …
External link:
http://arxiv.org/abs/2402.01878
Author:
Li, Minghan, Zhuang, Honglei, Hui, Kai, Qin, Zhen, Lin, Jimmy, Jagerman, Rolf, Wang, Xuanhui, Bendersky, Michael
Query expansion has been widely used to improve the search results of first-stage retrievers, yet its influence on second-stage, cross-encoder rankers remains under-explored. A recent work of Weller et al. [44] shows that current expansion techniques …
External link:
http://arxiv.org/abs/2311.09175
Author:
Drozdov, Andrew, Zhuang, Honglei, Dai, Zhuyun, Qin, Zhen, Rahimi, Razieh, Wang, Xuanhui, Alon, Dana, Iyyer, Mohit, McCallum, Andrew, Metzler, Donald, Hui, Kai
Recent studies show that large language models (LLMs) can be instructed to effectively perform zero-shot passage re-ranking, in which the results of a first stage retrieval method, such as BM25, are rated and reordered to improve relevance. In this …
External link:
http://arxiv.org/abs/2310.14408
Zero-shot text rankers powered by recent LLMs achieve remarkable ranking performance by simply prompting. Existing prompts for pointwise LLM rankers mostly ask the model to choose from binary relevance labels like "Yes" and "No". However, the lack of …
External link:
http://arxiv.org/abs/2310.14122
Author:
Qin, Zhen, Jagerman, Rolf, Hui, Kai, Zhuang, Honglei, Wu, Junru, Yan, Le, Shen, Jiaming, Liu, Tianqi, Liu, Jialu, Metzler, Donald, Wang, Xuanhui, Bendersky, Michael
Ranking documents using Large Language Models (LLMs) by directly feeding the query and candidate documents into the prompt is an interesting and practical problem. However, researchers have found it difficult to outperform fine-tuned baseline rankers …
External link:
http://arxiv.org/abs/2306.17563