Showing 1 - 10 of 150 results for the search: '"Wang, Shuaiqiang"'
Author:
Yang, Xihong, Jing, Heming, Zhang, Zixing, Wang, Jindong, Niu, Huakang, Wang, Shuaiqiang, Lu, Yu, Wang, Junfeng, Yin, Dawei, Liu, Xinwang, Zhu, En, Lian, Defu, Min, Erxue
Benefiting from their strong reasoning capabilities, large language models (LLMs) have demonstrated remarkable performance in recommender systems. Various efforts have been made to distill knowledge from LLMs to enhance collaborative models, employing…
External link:
http://arxiv.org/abs/2408.08231
Author:
Xiong, Haoyi, Bian, Jiang, Li, Yuchen, Li, Xuhong, Du, Mengnan, Wang, Shuaiqiang, Yin, Dawei, Helal, Sumi
Combining Large Language Models (LLMs) with search engine services marks a significant shift in the field of services computing, opening up new possibilities to enhance how we search for and retrieve information, understand content, and interact with…
External link:
http://arxiv.org/abs/2407.00128
Author:
Yang, Xin, Chang, Heng, Lai, Zhijian, Yang, Jinze, Li, Xingrun, Lu, Yu, Wang, Shuaiqiang, Yin, Dawei, Min, Erxue
Cross-Domain Recommendation (CDR) seeks to utilize knowledge from different domains to alleviate the problem of data sparsity in the target recommendation domain, and it has been gaining more attention in recent years. Although there have been notable…
External link:
http://arxiv.org/abs/2406.17289
Author:
Qu, Changle, Dai, Sunhao, Wei, Xiaochi, Cai, Hengyi, Wang, Shuaiqiang, Yin, Dawei, Xu, Jun, Wen, Ji-Rong
Recently, tool learning with large language models (LLMs) has emerged as a promising paradigm for augmenting the capabilities of LLMs to tackle highly complex problems. Despite growing attention and rapid advancements in this field, the existing literature…
External link:
http://arxiv.org/abs/2405.17935
Author:
Qu, Changle, Dai, Sunhao, Wei, Xiaochi, Cai, Hengyi, Wang, Shuaiqiang, Yin, Dawei, Xu, Jun, Wen, Ji-Rong
Recently, integrating external tools with Large Language Models (LLMs) has gained significant attention as an effective strategy to mitigate the limitations inherent in their pre-training data. However, real-world systems often incorporate a wide array…
External link:
http://arxiv.org/abs/2405.16089
Author:
Jia, Pengyue, Liu, Yiding, Li, Xiaopeng, Zhao, Xiangyu, Wang, Yuhao, Du, Yantong, Han, Xiao, Wei, Xuetao, Wang, Shuaiqiang, Yin, Dawei
Worldwide geolocalization aims to determine the precise coordinate-level location of photos taken anywhere on Earth. It is very challenging due to 1) the difficulty of capturing subtle location-aware visual semantics, and 2) the heterogeneous…
External link:
http://arxiv.org/abs/2405.14702
Large Language Models (LLMs) have demonstrated remarkable performance across diverse tasks but are constrained by their small context window sizes. Various efforts have been proposed to expand the context window to accommodate even up to 200K input tokens…
External link:
http://arxiv.org/abs/2404.05446
Author:
Zhao, Yukun, Yan, Lingyong, Sun, Weiwei, Xing, Guoliang, Wang, Shuaiqiang, Meng, Chong, Cheng, Zhicong, Ren, Zhaochun, Yin, Dawei
Large language models (LLMs) have shown tremendous success in following user instructions and generating helpful responses. Nevertheless, their robustness is still far from optimal, as they may generate significantly inconsistent responses due to minor…
External link:
http://arxiv.org/abs/2403.14221
Author:
Zeng, Shenglai, Zhang, Jiankun, He, Pengfei, Xing, Yue, Liu, Yiding, Xu, Han, Ren, Jie, Wang, Shuaiqiang, Yin, Dawei, Chang, Yi, Tang, Jiliang
Retrieval-augmented generation (RAG) is a powerful technique for augmenting language models with proprietary and private data, where data privacy is a pivotal concern. Whereas extensive research has demonstrated the privacy risks of large language models…
External link:
http://arxiv.org/abs/2402.16893
Author:
Lyu, Yougang, Yan, Lingyong, Wang, Shuaiqiang, Shi, Haibo, Yin, Dawei, Ren, Pengjie, Chen, Zhumin, de Rijke, Maarten, Ren, Zhaochun
Despite their success at many natural language processing (NLP) tasks, large language models still struggle to effectively leverage knowledge for knowledge-intensive tasks, manifesting limitations such as generating incomplete, non-factual, or illogical…
External link:
http://arxiv.org/abs/2402.11176