Showing 1 - 10 of 65 for search: '"Wang Yuanchun"'
Author:
Yu, Jifan, Zhang, Zheyuan, Zhang-li, Daniel, Tu, Shangqing, Hao, Zhanxin, Li, Rui Miao, Li, Haoxuan, Wang, Yuanchun, Li, Hanming, Gong, Linlu, Cao, Jie, Lin, Jiayin, Zhou, Jinchang, Qin, Fei, Wang, Haohua, Jiang, Jianxiao, Deng, Lijun, Zhan, Yisi, Xiao, Chaojun, Dai, Xusheng, Yan, Xuan, Lin, Nianyi, Zhang, Nan, Ni, Ruixin, Dang, Yang, Hou, Lei, Zhang, Yu, Han, Xu, Li, Manli, Li, Juanzi, Liu, Zhiyuan, Liu, Huiqin, Sun, Maosong
Since the first instances of online education, where courses were uploaded to accessible and shared online platforms, this form of scaling the dissemination of human knowledge to reach a broader audience has sparked extensive discussion and widespread…
External link:
http://arxiv.org/abs/2409.03512
Author:
Tu, Shangqing, Wang, Yuanchun, Yu, Jifan, Xie, Yuyang, Shi, Yaran, Wang, Xiaozhi, Zhang, Jing, Hou, Lei, Li, Juanzi
Large language models have achieved remarkable success on general NLP tasks, but they may fall short for domain-specific problems. Recently, various Retrieval-Augmented Large Language Models (RALLMs) have been proposed to address this shortcoming. However, …
External link:
http://arxiv.org/abs/2406.11681
Author:
Wang, Yuanchun, Yu, Jifan, Yao, Zijun, Zhang, Jing, Xie, Yuyang, Tu, Shangqing, Fu, Yiyang, Feng, Youhe, Zhang, Jinkai, Zhang, Jingyao, Huang, Bowen, Li, Yuanyao, Yuan, Huihui, Hou, Lei, Li, Juanzi, Tang, Jie
Applying large language models (LLMs) for academic API usage shows promise in reducing researchers' academic information seeking efforts. However, current LLM API-using methods struggle with complex API coupling commonly encountered in academic queries…
External link:
http://arxiv.org/abs/2405.15165
Author:
Tan, Shicheng, Tam, Weng Lam, Wang, Yuanchun, Gong, Wenwen, Yang, Yang, Tang, Hongyin, He, Keqing, Liu, Jiahao, Wang, Jingang, Zhao, Shu, Zhang, Peng, Tang, Jie
Currently, the reduction in the parameter scale of large-scale pre-trained language models (PLMs) through knowledge distillation has greatly facilitated their widespread deployment on various devices. However, the deployment of knowledge distillation…
External link:
http://arxiv.org/abs/2306.06629
Author:
Tan, Shicheng, Tam, Weng Lam, Wang, Yuanchun, Gong, Wenwen, Zhao, Shu, Zhang, Peng, Tang, Jie
The large scale of pre-trained language models poses a challenge for their deployment on various devices, with a growing emphasis on methods to compress these models, particularly knowledge distillation. However, current knowledge distillation methods…
External link:
http://arxiv.org/abs/2306.06625
Published in:
In Knowledge-Based Systems, Vol. 289, 8 April 2024
Author:
Shao, Zhonghui, Zhang, Jing, Li, Haoyang, Huang, Xinmei, Zhou, Chao, Wang, Yuanchun, Gong, Jibing, Li, Cuiping, Chen, Hong
Published in:
In AI Open, 2024, 5:94-103
Academic article
This result cannot be displayed to users who are not signed in; signing in is required to view it.
Academic article
This result cannot be displayed to users who are not signed in; signing in is required to view it.
Academic article
This result cannot be displayed to users who are not signed in; signing in is required to view it.