Showing 1 - 10 of 3,108 for search: '"Hu, Xing"'
Author:
Han, Husheng, Zheng, Xinyao, Wen, Yuanbo, Hao, Yifan, Feng, Erhu, Liang, Ling, Mu, Jianan, Li, Xiaqing, Ma, Tianyun, Jin, Pengwei, Song, Xinkai, Du, Zidong, Guo, Qi, Hu, Xing
Heterogeneous collaborative computing with NPUs and CPUs has received widespread attention due to its substantial performance benefits. To ensure data confidentiality and integrity during computation, Trusted Execution Environments (TEEs) are considered a…
External link:
http://arxiv.org/abs/2407.08903
Author:
Wu, Yutong, Huang, Di, Shi, Wenxuan, Wang, Wei, Gao, Lingzhe, Liu, Shihao, Nan, Ziyuan, Yuan, Kaizhao, Zhang, Rui, Zhang, Xishan, Du, Zidong, Guo, Qi, Pu, Yewen, Yin, Dawei, Hu, Xing, Chen, Yunji
Recent advancements in open-source code large language models (LLMs) have demonstrated remarkable coding abilities by fine-tuning on data generated from powerful closed-source LLMs such as GPT-3.5 and GPT-4 for instruction tuning. This paper expl…
External link:
http://arxiv.org/abs/2407.05700
Large language models (LLMs) achieve promising results in code generation based on a given natural language description. They have been integrated into open-source projects and commercial products to facilitate daily coding activities. The natural la…
External link:
http://arxiv.org/abs/2406.19783
The increasing use of Large Language Models (LLMs) in software development has garnered significant attention from researchers assessing the quality of the code they generate. However, much of the research focuses on controlled datasets such as Human…
External link:
http://arxiv.org/abs/2406.19544
Author:
Zhao, Zhengyue, Zhang, Xiaoyun, Xu, Kaidi, Hu, Xing, Zhang, Rui, Du, Zidong, Guo, Qi, Chen, Yunji
With the widespread application of Large Language Models (LLMs), it has become a significant concern to ensure their safety and prevent harmful responses. While current safe-alignment methods based on instruction fine-tuning and Reinforcement Learnin…
External link:
http://arxiv.org/abs/2406.16743
Software vulnerabilities pose significant risks to the security and integrity of software systems. Prior studies have proposed a series of approaches to vulnerability detection using deep learning or pre-trained models. However, there is still a lack…
External link:
http://arxiv.org/abs/2406.09701
Large language models (LLMs) have demonstrated remarkable capabilities in code generation tasks. However, repository-level code generation presents unique challenges, particularly due to the need to utilize information spread across multiple files wi…
External link:
http://arxiv.org/abs/2406.03283
Author:
Gao, Haihan, Zhang, Rui, Yi, Qi, Yao, Hantao, Li, Haochen, Guo, Jiaming, Peng, Shaohui, Gao, Yunkai, Wang, QiCheng, Hu, Xing, Wen, Yuanbo, Zhang, Zihao, Du, Zidong, Li, Ling, Guo, Qi, Chen, Yunji
Overfitting has become one of the main obstacles to real-world applications of reinforcement learning (RL). Existing methods do not provide an explicit semantic constraint on the feature extractor, hindering the agent from learning a unified cross-domain repr…
External link:
http://arxiv.org/abs/2406.03250
Author:
Zhou, Zhizhi, Jiang, Jiahuan, Sun, Yuanyuan, Qin, Qing, Yuan, Sitong, Wang, Xilin, Jiang, Jianhua, Su, Yifeng, Hu, Xing, Liu, Mingying, Yang, Feng
In this study, we successfully developed two-dimensional paper-based analytical devices using a hybrid technique of injection molding and embossing. This innovative approach involves passive or active delivery of molten wax onto a glass substrate thr…
External link:
http://arxiv.org/abs/2405.21001
Post-training quantization (PTQ) is a potent technique for accelerating inference of large language models (LLMs). Nonetheless, existing works still require a considerable number of floating-point (FP) operations during inference, includi…
External link:
http://arxiv.org/abs/2405.17849