How Important Are Good Method Names in Neural Code Generation? A Model Robustness Perspective.

Authors: Yang, Guang; Zhou, Yu; Yang, Wenhua; Yue, Tao; Chen, Xiang; Chen, Taolue
Source: ACM Transactions on Software Engineering & Methodology; March 2024, Vol. 33, Issue 3, pp. 1-35, 35 pages
Abstract: Pre-trained code generation models (PCGMs) have been widely applied in neural code generation; they can generate executable code from functional descriptions in natural language, possibly together with method signatures. Despite substantial performance improvements of PCGMs, the role of method names in neural code generation has not been thoroughly investigated. In this article, we study and demonstrate the potential of exploiting method names to enhance the performance of PCGMs from a model robustness perspective. Specifically, we propose a novel approach, named neuRAl coDe generAtor Robustifier (RADAR). RADAR consists of two components: RADAR-Attack and RADAR-Defense. The former attacks a PCGM by generating adversarial method names as part of the input; these names are semantically and visually similar to the original input but may trick the PCGM into generating completely unrelated code snippets. As a countermeasure to such attacks, RADAR-Defense synthesizes a new method name from the functional description and supplies it to the PCGM. Evaluation results show that RADAR-Attack can reduce the CodeBLEU of generated code by 19.72% to 38.74% for three state-of-the-art PCGMs (i.e., CodeGPT, PLBART, and CodeT5) in the fine-tuning code generation task, and reduce the Pass@1 of generated code by 32.28% to 44.42% for three state-of-the-art PCGMs (i.e., Replit, CodeGen, and CodeT5+) in the zero-shot code generation task. Moreover, RADAR-Defense is able to restore the performance of PCGMs with synthesized method names. These results highlight the importance of good method names in neural code generation and point to the benefits of studying model robustness in software engineering. [ABSTRACT FROM AUTHOR]
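For intuition, the following is a minimal illustrative sketch, not the authors' RADAR-Attack algorithm, of how adversarial method names that stay semantically and visually similar to the original might be generated; the synonym map, character swaps, and function names below are assumptions made for illustration only.

```python
# Illustrative sketch only: NOT the RADAR implementation from the paper.
# It shows the general idea of perturbing a method name into candidates that
# look or read similarly to a human but differ as tokens fed to a PCGM.

import re

# Hypothetical synonym map for common verbs in method names (assumption).
SYNONYMS = {
    "get": ["fetch", "retrieve"],
    "compute": ["calc", "calculate"],
    "sort": ["order", "arrange"],
}

# Visually confusable character swaps that still yield valid identifiers.
VISUAL_SWAPS = {"l": "1", "O": "0", "o": "0"}


def split_identifier(name: str) -> list[str]:
    """Split a snake_case or camelCase method name into lowercase sub-tokens."""
    parts = []
    for chunk in name.split("_"):
        parts.extend(re.findall(r"[A-Z]?[a-z0-9]+|[A-Z]+(?![a-z])", chunk))
    return [p.lower() for p in parts if p]


def candidate_names(name: str) -> list[str]:
    """Generate semantically or visually similar variants of a method name."""
    tokens = split_identifier(name)
    variants = set()

    # Semantic variants: replace one sub-token with a synonym.
    for i, tok in enumerate(tokens):
        for syn in SYNONYMS.get(tok, []):
            variants.add("_".join(tokens[:i] + [syn] + tokens[i + 1:]))

    # Visual variants: swap one confusable character (never the first one,
    # so the result remains a valid identifier).
    for i, ch in enumerate(name):
        if i > 0 and ch in VISUAL_SWAPS:
            variants.add(name[:i] + VISUAL_SWAPS[ch] + name[i + 1:])

    variants.discard(name)
    return sorted(variants)


if __name__ == "__main__":
    print(candidate_names("get_sorted_list"))
    # e.g. ['fetch_sorted_list', 'get_sorted_1ist', 'get_s0rted_list', ...]
```

In an attack setting, such candidates would then be ranked by how much they degrade the generated code (e.g., by the drop in CodeBLEU or Pass@1), keeping the variant that hurts the model most; the ranking step is omitted here.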
Database: Complementary Index