Showing 1 - 10 of 160 results for search: '"Wang, Shuaiqiang"'
Author:
Wu, Jiayi, Sun, Hao, Cai, Hengyi, Su, Lixin, Wang, Shuaiqiang, Yin, Dawei, Li, Xiang, Gao, Ming
The number of large language models (LLMs) with varying parameter scales and vocabularies is increasing. While they deliver powerful performance, they also face a set of common optimization needs to meet specific requirements or standards, such as in…
External link:
http://arxiv.org/abs/2410.17599
Author:
Sun, Hao, Wu, Jiayi, Cai, Hengyi, Wei, Xiaochi, Feng, Yue, Wang, Bo, Wang, Shuaiqiang, Zhang, Yan, Yin, Dawei
Recent advancements in large language models (LLMs) have been remarkable. Users face a choice between using cloud-based LLMs for generation quality and deploying local-based LLMs for lower computational cost. The former option is typically costly and…
External link:
http://arxiv.org/abs/2410.13181
Author:
Qu, Changle, Dai, Sunhao, Wei, Xiaochi, Cai, Hengyi, Wang, Shuaiqiang, Yin, Dawei, Xu, Jun, Wen, Ji-Rong
Tool learning enables Large Language Models (LLMs) to interact with external environments by invoking tools, serving as an effective strategy to mitigate the limitations inherent in their pre-training data. In this process, tool documentation plays a…
External link:
http://arxiv.org/abs/2410.08197
Author:
Li, Yuchen, Xiong, Haoyi, Kong, Linghe, Bian, Jiang, Wang, Shuaiqiang, Chen, Guihai, Yin, Dawei
Learning to rank (LTR) is widely employed in web searches to prioritize pertinent webpages from retrieved content based on input queries. However, traditional LTR models encounter two principal obstacles that lead to suboptimal performance: (1) the…
External link:
http://arxiv.org/abs/2409.16594
Author:
Li, Yuchen, Xiong, Haoyi, Kong, Linghe, Sun, Zeyi, Chen, Hongyang, Wang, Shuaiqiang, Yin, Dawei
Both Transformer and Graph Neural Networks (GNNs) have been employed in the domain of learning to rank (LTR). However, these approaches adhere to two distinct yet complementary problem formulations: ranking score regression based on query-webpage pairs…
External link:
http://arxiv.org/abs/2409.16590
Author:
Liu, Jiongnan, Zhu, Yutao, Wang, Shuting, Wei, Xiaochi, Min, Erxue, Lu, Yu, Wang, Shuaiqiang, Yin, Dawei, Dou, Zhicheng
Personalization plays a critical role in numerous language tasks and applications, since users with the same requirements may prefer diverse outputs based on their individual interests. This has led to the development of various personalized approaches…
External link:
http://arxiv.org/abs/2409.11901
Author:
Yang, Xihong, Jing, Heming, Zhang, Zixing, Wang, Jindong, Niu, Huakang, Wang, Shuaiqiang, Lu, Yu, Wang, Junfeng, Yin, Dawei, Liu, Xinwang, Zhu, En, Lian, Defu, Min, Erxue
Benefiting from their strong reasoning capabilities, large language models (LLMs) have demonstrated remarkable performance in recommender systems. Various efforts have been made to distill knowledge from LLMs to enhance collaborative models, employing…
External link:
http://arxiv.org/abs/2408.08231
Author:
Xiong, Haoyi, Bian, Jiang, Li, Yuchen, Li, Xuhong, Du, Mengnan, Wang, Shuaiqiang, Yin, Dawei, Helal, Sumi
Combining Large Language Models (LLMs) with search engine services marks a significant shift in the field of services computing, opening up new possibilities to enhance how we search for and retrieve information, understand content, and interact with…
External link:
http://arxiv.org/abs/2407.00128
Author:
Yang, Xin, Chang, Heng, Lai, Zhijian, Yang, Jinze, Li, Xingrun, Lu, Yu, Wang, Shuaiqiang, Yin, Dawei, Min, Erxue
Cross-Domain Recommendation (CDR) seeks to utilize knowledge from different domains to alleviate the problem of data sparsity in the target recommendation domain, and it has been gaining more attention in recent years. Although there have been notable…
External link:
http://arxiv.org/abs/2406.17289
Author:
Qu, Changle, Dai, Sunhao, Wei, Xiaochi, Cai, Hengyi, Wang, Shuaiqiang, Yin, Dawei, Xu, Jun, Wen, Ji-Rong
Recently, tool learning with large language models (LLMs) has emerged as a promising paradigm for augmenting the capabilities of LLMs to tackle highly complex problems. Despite growing attention and rapid advancements in this field, the existing literature…
External link:
http://arxiv.org/abs/2405.17935