Showing 1 - 10 of 413 for search: '"Chen, Taolue"'
Author:
Yang, Guang, Zhou, Yu, Cheng, Wei, Zhang, Xiangyu, Chen, Xiang, Zhuo, Terry Yue, Liu, Ke, Zhou, Xin, Lo, David, Chen, Taolue
The widespread use of Large Language Models (LLMs) in software engineering has intensified the need for improved model and resource efficiency. In particular, for neural code generation, LLMs are used to translate function/method signature and DocStr
External link:
http://arxiv.org/abs/2410.22793
Author:
Li, Zenan, Huang, Yunpeng, Li, Zhaoyu, Yao, Yuan, Xu, Jingwei, Chen, Taolue, Ma, Xiaoxing, Lu, Jian
Neuro-symbolic systems combine the abilities of neural perception and logical reasoning. However, end-to-end learning of neuro-symbolic systems is still an unsolved challenge. This paper proposes a natural framework that fuses neural network training
External link:
http://arxiv.org/abs/2410.20957
Author:
Gao, Hao, Wang, Jingyue, Fang, Wenyang, Xu, Jingwei, Huang, Yunpeng, Chen, Taolue, Ma, Xiaoxing
Autonomous Driving Systems (ADS) require diverse and safety-critical traffic scenarios for effective training and testing, but existing data generation methods struggle to provide flexibility and scalability. We propose LASER, a novel framework
External link:
http://arxiv.org/abs/2410.16197
Code Language Models (CLMs), particularly those leveraging deep learning, have achieved significant success in the code intelligence domain. However, the issue of security, particularly backdoor attacks, is often overlooked in this process. The previous
External link:
http://arxiv.org/abs/2407.08956
Recent studies in neuro-symbolic learning have explored the integration of logical knowledge into deep learning via encoding logical constraints as an additional loss function. However, existing approaches tend to vacuously satisfy logical constraint
External link:
http://arxiv.org/abs/2403.00329
Neuro-symbolic learning generally consists of two separated worlds, i.e., neural network training and symbolic constraint solving, whose success hinges on symbol grounding, a fundamental problem in AI. This paper presents a novel, softened symbol gro
External link:
http://arxiv.org/abs/2403.00323
Timing side-channel attacks exploit secret-dependent execution time to fully or partially recover secrets of cryptographic implementations, posing a severe threat to software security. Constant-time programming discipline is an effective software-bas
External link:
http://arxiv.org/abs/2402.13506
Large Language Models (LLMs) have demonstrated remarkable potential in code generation. The integration of Chain of Thought (CoT) reasoning can further boost their performance. However, current CoT methods often require manual writing or LLMs with ov
External link:
http://arxiv.org/abs/2312.05562
Author:
Huang, Yunpeng, Xu, Jingwei, Lai, Junyu, Jiang, Zixu, Chen, Taolue, Li, Zenan, Yao, Yuan, Ma, Xiaoxing, Yang, Lijuan, Chen, Hao, Li, Shupeng, Zhao, Penghao
Transformer-based Large Language Models (LLMs) have been applied in diverse areas such as knowledge bases, human interfaces, and dynamic agents, marking a stride towards achieving Artificial General Intelligence (AGI). However, current LLMs are p
External link:
http://arxiv.org/abs/2311.12351
Context: Pre-trained models (PTMs) have demonstrated significant potential in automatic code translation. However, the vulnerability of these models in translation tasks, particularly in terms of syntax, has not been extensively investigated. Objecti
External link:
http://arxiv.org/abs/2310.18587