Showing 1 - 9 of 9
for search: '"Chang, Kaiyan"'
Author:
Chang, Kaiyan, Chen, Zhirong, Zhou, Yunhao, Zhu, Wenlong, Wang, Kun, Xu, Haobo, Li, Cangyuan, Wang, Mengdi, Liang, Shengwen, Li, Huawei, Han, Yinhe, Wang, Ying
Natural language interfaces have exhibited considerable potential in the automation of Verilog generation derived from high-level specifications through the utilization of large language models, garnering significant attention. Nevertheless, this paper…
External link:
http://arxiv.org/abs/2407.08473
Author:
Wang, Chenglong, Zhou, Hang, Chang, Kaiyan, Li, Bei, Mu, Yongyu, Xiao, Tong, Liu, Tongran, Zhu, Jingbo
Alignment training is crucial for enabling large language models (LLMs) to cater to human intentions and preferences. It is typically performed based on two stages with different objectives: instruction-following alignment and human-preference alignment…
External link:
http://arxiv.org/abs/2406.15178
Prompting has become a mainstream paradigm for adapting large language models (LLMs) to specific natural language processing tasks. While this approach opens the door to in-context learning of LLMs, it brings the additional computational burden of…
External link:
http://arxiv.org/abs/2404.01077
Author:
Chang, Kaiyan, Wang, Kun, Yang, Nan, Wang, Ying, Jin, Dantong, Zhu, Wenlong, Chen, Zhirong, Li, Cangyuan, Yan, Hao, Zhou, Yunhao, Zhao, Zhuoliang, Cheng, Yuan, Pan, Yudong, Liu, Yiqi, Wang, Mengdi, Liang, Shengwen, Han, Yinhe, Li, Huawei, Li, Xiaowei
Recent advances in large language models have demonstrated their potential for automated generation of hardware description language (HDL) code from high-level prompts. Researchers have utilized fine-tuning to enhance the ability of these large language models…
External link:
http://arxiv.org/abs/2403.11202
Author:
Wang, Chenglong, Zhou, Hang, Chang, Kaiyan, Liu, Tongran, Zhang, Chunliang, Du, Quan, Xiao, Tong, Zhu, Jingbo
Large language models achieve state-of-the-art performance on sequence generation evaluation, but typically have a large number of parameters. This poses a computational challenge when applying their evaluation capability at scale. To overcome…
External link:
http://arxiv.org/abs/2308.04386
Author:
Chang, Kaiyan, Wang, Ying, Ren, Haimeng, Wang, Mengdi, Liang, Shengwen, Han, Yinhe, Li, Huawei, Li, Xiaowei
As large language models (LLMs) like ChatGPT exhibit unprecedented machine intelligence, they also show great performance in assisting hardware engineers to realize higher-efficiency logic design via natural language interaction. To estimate the potential…
External link:
http://arxiv.org/abs/2305.14019
Teamwork is increasingly important in today's society. This paper addresses the problem of team performance evaluation. Through complex network feature extraction, we establish a passing network and a team performance evaluation model. Finally, this…
External link:
http://arxiv.org/abs/2004.11039
Integrating idle embedded devices into cloud computing is a promising approach to support distributed machine learning. In this paper, we address the data hiding problem in such distributed machine learning systems. For the purpose of…
External link:
http://arxiv.org/abs/2004.10968
Published in:
Journal of Systems Architecture, March 2021, 114