Showing 1 - 10 of 482 for search: '"Pham Hung, Viet"'
Author:
Pham, Hung Viet, Nguyen, Tung Thanh
Traditional defect prediction approaches often use metrics that measure the complexity of the design or implementation code of a software system, such as the number of lines of code in a source file. In this paper, we explore a different approach based…
External link:
http://arxiv.org/abs/2409.18365
Large Language Models (LLMs) have demonstrated remarkable abilities across various tasks, leveraging advanced reasoning. Yet, they struggle with task-oriented prompts due to a lack of specific prior knowledge of the task answers. The current state-of-the-art…
External link:
http://arxiv.org/abs/2409.16418
Large Language Models (LLMs) have demonstrated impressive performance in software engineering tasks. However, improving their accuracy in generating correct and reliable code remains challenging. Numerous prompt engineering techniques (PETs) have been…
External link:
http://arxiv.org/abs/2409.16416
Large Language Models (LLMs) have seen increasing use in various software development tasks, especially in code generation. The most advanced recent methods attempt to incorporate feedback from code execution into prompts to help guide LLMs in generating…
External link:
http://arxiv.org/abs/2408.11198
Can ChatGPT Support Developers? An Empirical Evaluation of Large Language Models for Code Generation
Large language models (LLMs) have demonstrated notable proficiency in code generation, with numerous prior studies showing their promising capabilities in various development scenarios. However, these studies mainly provide evaluations in research settings…
External link:
http://arxiv.org/abs/2402.11702
Author:
Mohajer, Mohammad Mahdi, Aleithan, Reem, Harzevili, Nima Shiri, Wei, Moshi, Belle, Alvine Boaye, Pham, Hung Viet, Wang, Song
We introduce SkipAnalyzer, a large language model (LLM)-powered tool for static code analysis. SkipAnalyzer has three components: 1) an LLM-based static bug detector that scans source code and reports specific types of bugs, 2) an LLM-based false-positive…
External link:
http://arxiv.org/abs/2310.18532
In this work, we set out to conduct the first ground-truth empirical evaluation of state-of-the-art DL fuzzers. Specifically, we first manually created an extensive DL bug benchmark dataset, which includes 627 real-world DL bugs from TensorFlow and PyTorch…
External link:
http://arxiv.org/abs/2310.06912
Recently, many Deep Learning fuzzers have been proposed for testing of DL libraries. However, they either perform unguided input generation (e.g., not considering the relationship between API arguments when generating inputs) or only support a limited…
External link:
http://arxiv.org/abs/2306.03269
Author:
Wu, Yi, Jiang, Nan, Pham, Hung Viet, Lutellier, Thibaud, Davis, Jordan, Tan, Lin, Babkin, Petr, Shah, Sameena
Security vulnerability repair is a difficult task that is in dire need of automation. Two groups of techniques have shown promise: (1) large code language models (LLMs) that have been pre-trained on source code for tasks such as code completion, and…
External link:
http://arxiv.org/abs/2305.18607
Author:
Xie, Danning, Li, Yitong, Kim, Mijung, Pham, Hung Viet, Tan, Lin, Zhang, Xiangyu, Godfrey, Michael W.
Input constraints are useful for many software development tasks. For example, input constraints of a function enable the generation of valid inputs, i.e., inputs that follow these constraints, to test the function deeper. API functions of deep learning…
External link:
http://arxiv.org/abs/2109.01002