Showing 1 - 10 of 128 for search: '"LIU Shichun"'
Author:
Dou, Shihan, Zhang, Jiazheng, Zang, Jianxiang, Tao, Yunbo, Zhou, Weikang, Jia, Haoxiang, Liu, Shichun, Yang, Yuming, Xi, Zhiheng, Wu, Shenxi, Zhang, Shaoqing, Wu, Muling, Lv, Changze, Xiong, Limao, Zhan, Wenyu, Zhang, Lin, Weng, Rongxiang, Wang, Jingang, Cai, Xunliang, Wu, Yueming, Wen, Ming, Zheng, Rui, Ji, Tao, Cao, Yixin, Gui, Tao, Qiu, Xipeng, Zhang, Qi, Huang, Xuanjing
We introduce MPLSandbox, an out-of-the-box multi-programming language sandbox designed to provide unified and comprehensive feedback from compiler and analysis tools for Large Language Models (LLMs). It can automatically identify the programming language…
External link:
http://arxiv.org/abs/2410.23074
Author:
Zhang, Ming, Huang, Caishuang, Wu, Yilong, Liu, Shichun, Zheng, Huiyuan, Dong, Yurui, Shen, Yujiong, Dou, Shihan, Zhao, Jun, Ye, Junjie, Zhang, Qi, Gui, Tao, Huang, Xuanjing
Task-oriented dialogue (TOD) systems aim to efficiently handle task-oriented conversations, including information collection. How to utilize TOD accurately, efficiently and effectively for information collection has always been a critical and challenging…
External link:
http://arxiv.org/abs/2407.21693
Author:
He, Wei, Liu, Shichun, Zhao, Jun, Ding, Yiwen, Lu, Yi, Xi, Zhiheng, Gui, Tao, Zhang, Qi, Huang, Xuanjing
Large language models (LLMs) have shown promising abilities of in-context learning (ICL), adapting swiftly to new tasks with only few-shot demonstrations. However, current few-shot methods heavily depend on high-quality, query-specific demos, which are…
External link:
http://arxiv.org/abs/2404.00884
Author:
Xi, Zhiheng, Chen, Wenxiang, Hong, Boyang, Jin, Senjie, Zheng, Rui, He, Wei, Ding, Yiwen, Liu, Shichun, Guo, Xin, Wang, Junzhe, Guo, Honglin, Shen, Wei, Fan, Xiaoran, Zhou, Yuhao, Dou, Shihan, Wang, Xiao, Zhang, Xinbo, Sun, Peng, Gui, Tao, Zhang, Qi, Huang, Xuanjing
In this paper, we propose R$^3$: Learning Reasoning through Reverse Curriculum Reinforcement Learning (RL), a novel method that employs only outcome supervision to achieve the benefits of process supervision for large language models. The core challenge…
External link:
http://arxiv.org/abs/2402.05808
Author:
Zhang, Yue, Zhang, Ming, Yuan, Haipeng, Liu, Shichun, Shi, Yongyao, Gui, Tao, Zhang, Qi, Huang, Xuanjing
Recently, the evaluation of Large Language Models has emerged as a popular area of research. The three crucial questions for LLM evaluation are "what, where, and how to evaluate". However, the existing research mainly focuses on the first two questions…
External link:
http://arxiv.org/abs/2312.07398
Author:
Ye, Junjie, Chen, Xuanting, Xu, Nuo, Zu, Can, Shao, Zekai, Liu, Shichun, Cui, Yuhan, Zhou, Zeyang, Gong, Chao, Shen, Yang, Zhou, Jie, Chen, Siming, Gui, Tao, Zhang, Qi, Huang, Xuanjing
GPT series models, such as GPT-3, CodeX, InstructGPT, ChatGPT, and so on, have gained considerable attention due to their exceptional natural language processing capabilities. However, despite the abundance of research on the differences in capabilities…
External link:
http://arxiv.org/abs/2303.10420
Author:
Zhao, Mingwei, Yan, Xiaowei, Zhang, Liyuan, Yan, Ruoqin, Liu, Shichun, Ma, Zhenfeng, Dai, Caili
Published in:
In Geoenergy Science and Engineering June 2024 237
Published in:
In Journal of Materials Research and Technology May-June 2024 30:1786-1794
Published in:
In Colloids and Surfaces A: Physicochemical and Engineering Aspects 5 October 2023 674
Academic article