Showing 1 - 3 of 3 for search: '"Min, Marcus J."'
Code Large Language Models (Code LLMs) have excelled at tasks like code completion but often miss deeper semantics such as execution effects and dynamic states. This paper aims to bridge the gap between Code LLMs' reliance on static text data and the …
External link:
http://arxiv.org/abs/2406.01006
Pre-trained code language models have achieved promising performance in code generation and improved the programming efficiency of human developers. However, their self-refinement capability is typically overlooked by the existing evaluations of code …
External link:
http://arxiv.org/abs/2403.18746
Author:
Min, Marcus J., Ding, Yangruibo, Buratti, Luca, Pujar, Saurabh, Kaiser, Gail, Jana, Suman, Ray, Baishakhi
Code Large Language Models (Code LLMs) are being increasingly employed in real-life applications, so evaluating them is critical. While the conventional accuracy evaluates the performance of Code LLMs on a set of individual tasks, their self-consistency …
External link:
http://arxiv.org/abs/2310.14053