Showing 1 - 10 of 914 for search: '"Hui, Kai"'
Author:
Lee, Jinhyuk, Dai, Zhuyun, Ren, Xiaoqi, Chen, Blair, Cer, Daniel, Cole, Jeremy R., Hui, Kai, Boratko, Michael, Kapadia, Rajvi, Ding, Wen, Luan, Yi, Duddu, Sai Meher Karthik, Abrego, Gustavo Hernandez, Shi, Weiqiang, Gupta, Nithi, Kusupati, Aditya, Jain, Prateek, Jonnalagadda, Siddhartha Reddy, Chang, Ming-Wei, Naim, Iftekhar
We present Gecko, a compact and versatile text embedding model. Gecko achieves strong retrieval performance by leveraging a key idea: distilling knowledge from large language models (LLMs) into a retriever. Our two-step distillation process begins…
External link:
http://arxiv.org/abs/2403.20327
Author:
Li, Minghan, Zhuang, Honglei, Hui, Kai, Qin, Zhen, Lin, Jimmy, Jagerman, Rolf, Wang, Xuanhui, Bendersky, Michael
Query expansion has been widely used to improve the search results of first-stage retrievers, yet its influence on second-stage, cross-encoder rankers remains under-explored. A recent work by Weller et al. [44] shows that current expansion techniques…
External link:
http://arxiv.org/abs/2311.09175
Author:
Drozdov, Andrew, Zhuang, Honglei, Dai, Zhuyun, Qin, Zhen, Rahimi, Razieh, Wang, Xuanhui, Alon, Dana, Iyyer, Mohit, McCallum, Andrew, Metzler, Donald, Hui, Kai
Recent studies show that large language models (LLMs) can be instructed to effectively perform zero-shot passage re-ranking, in which the results of a first stage retrieval method, such as BM25, are rated and reordered to improve relevance. In this…
External link:
http://arxiv.org/abs/2310.14408
Zero-shot text rankers powered by recent LLMs achieve remarkable ranking performance by simply prompting. Existing prompts for pointwise LLM rankers mostly ask the model to choose from binary relevance labels like "Yes" and "No". However, the lack of…
External link:
http://arxiv.org/abs/2310.14122
Author:
Qin, Zhen, Jagerman, Rolf, Hui, Kai, Zhuang, Honglei, Wu, Junru, Yan, Le, Shen, Jiaming, Liu, Tianqi, Liu, Jialu, Metzler, Donald, Wang, Xuanhui, Bendersky, Michael
Ranking documents using Large Language Models (LLMs) by directly feeding the query and candidate documents into the prompt is an interesting and practical problem. However, researchers have found it difficult to outperform fine-tuned baseline rankers…
External link:
http://arxiv.org/abs/2306.17563
Author:
Qin, Zhen, Jagerman, Rolf, Pasumarthi, Rama, Zhuang, Honglei, Zhang, He, Bai, Aijun, Hui, Kai, Yan, Le, Wang, Xuanhui
The distillation of ranking models has become an important topic in both academia and industry. In recent years, several advanced methods have been proposed to tackle this problem, often leveraging ranking information from teacher rankers that is…
External link:
http://arxiv.org/abs/2306.04455
Author:
Pradeep, Ronak, Hui, Kai, Gupta, Jai, Lelkes, Adam D., Zhuang, Honglei, Lin, Jimmy, Metzler, Donald, Tran, Vinh Q.
Popularized by the Differentiable Search Index, the emerging paradigm of generative retrieval re-frames the classic information retrieval problem into a sequence-to-sequence modeling task, forgoing external indices and encoding an entire document…
External link:
http://arxiv.org/abs/2305.11841
Author:
Xian, Ruicheng, Zhuang, Honglei, Qin, Zhen, Zamani, Hamed, Lu, Jing, Ma, Ji, Hui, Kai, Zhao, Han, Wang, Xuanhui, Bendersky, Michael
Domain adaptation aims to transfer the knowledge learned on (data-rich) source domains to (low-resource) target domains, and a popular method is invariant representation learning, which matches and aligns the data distributions on the feature space.
External link:
http://arxiv.org/abs/2212.10764
Author:
Bohnet, Bernd, Tran, Vinh Q., Verga, Pat, Aharoni, Roee, Andor, Daniel, Soares, Livio Baldini, Ciaramita, Massimiliano, Eisenstein, Jacob, Ganchev, Kuzman, Herzig, Jonathan, Hui, Kai, Kwiatkowski, Tom, Ma, Ji, Ni, Jianmo, Saralegui, Lierni Sestorain, Schuster, Tal, Cohen, William W., Collins, Michael, Das, Dipanjan, Metzler, Donald, Petrov, Slav, Webster, Kellie
Large language models (LLMs) have shown impressive results while requiring little or no direct supervision. Further, there is mounting evidence that LLMs may have potential in information-seeking scenarios. We believe the ability of an LLM to…
External link:
http://arxiv.org/abs/2212.08037