Showing 1 - 10 of 311 for search: '"Wu, Chuhan"'
Author:
Liu, Weiwen, Huang, Xu, Zeng, Xingshan, Hao, Xinlong, Yu, Shuai, Li, Dexun, Wang, Shuai, Gan, Weinan, Liu, Zhengying, Yu, Yuanqing, Wang, Zezhong, Wang, Yuxian, Ning, Wu, Hou, Yutai, Wang, Bin, Wu, Chuhan, Wang, Xinzhi, Liu, Yong, Wang, Yasheng, Tang, Duyu, Tu, Dandan, Shang, Lifeng, Jiang, Xin, Tang, Ruiming, Lian, Defu, Liu, Qun, Chen, Enhong
Function calling significantly extends the application boundary of large language models, where high-quality and diverse training data is critical for unlocking this capability. However, real function-calling data is quite challenging to collect and…
External link:
http://arxiv.org/abs/2409.00920
Author:
Wu, Chuhan, Tang, Ruiming
Guided by the belief of the scaling law, large language models (LLMs) have achieved impressive performance in recent years. However, the scaling law only gives a qualitative estimation of loss, which is influenced by various factors such as model archite…
External link:
http://arxiv.org/abs/2408.09895
Author:
Chen, Bo, Dai, Xinyi, Guo, Huifeng, Guo, Wei, Liu, Weiwen, Liu, Yong, Qin, Jiarui, Tang, Ruiming, Wang, Yichao, Wu, Chuhan, Wu, Yaxiong, Zhang, Hao
Recommender systems (RS) are vital for managing information overload and delivering personalized content, responding to users' diverse information needs. The emergence of large language models (LLMs) offers a new horizon for redefining recommender sy…
External link:
http://arxiv.org/abs/2407.10081
Author:
Yin, Mingjia, Wu, Chuhan, Wang, Yufei, Wang, Hao, Guo, Wei, Wang, Yasheng, Liu, Yong, Tang, Ruiming, Lian, Defu, Chen, Enhong
Data is the cornerstone of large language models (LLMs), but not all data is useful for model learning. Carefully selected data can better elicit the capabilities of LLMs with much less computational overhead. Most methods concentrate on evaluating t…
External link:
http://arxiv.org/abs/2407.06645
Time series prediction is a fundamental problem in scientific exploration, and artificial intelligence (AI) technologies have substantially bolstered its efficiency and accuracy. A well-established paradigm in AI-driven time series prediction is injec…
External link:
http://arxiv.org/abs/2405.06986
Personalized recommendation stands as a ubiquitous channel for users to explore information or items aligned with their interests. Nevertheless, prevailing recommendation models predominantly rely on unique IDs and categorical features for user-item…
External link:
http://arxiv.org/abs/2405.06927
Large language models (LLMs) demonstrate general intelligence across a variety of machine learning tasks, thereby enhancing the commercial value of their intellectual property (IP). To protect this IP, model owners typically allow user access only in…
External link:
http://arxiv.org/abs/2405.02365
Author:
Sun, Peijie, Wang, Yifan, Zhang, Min, Wu, Chuhan, Fang, Yan, Zhu, Hong, Fang, Yuan, Wang, Meng
With the surge in mobile gaming, accurately predicting user spending on newly downloaded games has become paramount for maximizing revenue. However, the inherently unpredictable nature of user behavior poses significant challenges in this endeavor. T…
External link:
http://arxiv.org/abs/2404.08301
Author:
Zhang, Wenlin, Wu, Chuhan, Li, Xiangyang, Wang, Yuhao, Dong, Kuicai, Wang, Yichao, Dai, Xinyi, Zhao, Xiangyu, Guo, Huifeng, Tang, Ruiming
Recommender systems aim to predict user interest based on historical behavioral data. They are mainly designed as sequential pipelines, requiring lots of data to train different sub-systems, and are hard to scale to new domains. Recently, Large Langu…
External link:
http://arxiv.org/abs/2404.00702
Author:
Xi, Yunjia, Liu, Weiwen, Lin, Jianghao, Wu, Chuhan, Chen, Bo, Tang, Ruiming, Zhang, Weinan, Yu, Yong
The rise of large language models (LLMs) has opened new opportunities in Recommender Systems (RSs) by enhancing user behavior modeling and content understanding. However, current approaches that integrate LLMs into RSs solely utilize either LLM or co…
External link:
http://arxiv.org/abs/2403.16378