Showing 1 - 10 of 13 results for search: '"Panthaplackel, Sheena"'
To evaluate code large language models (LLMs), research has relied on a few small, manually curated benchmarks, such as HumanEval and MBPP, which represent only a narrow part of real-world software domains. In this work, we introduce round-trip correctness…
External link:
http://arxiv.org/abs/2402.08699
Automatically fixing software bugs is a challenging task. While recent work showed that natural language context is useful in guiding bug-fixing models, the approach required prompting developers to provide this context, which was simulated through…
External link:
http://arxiv.org/abs/2211.06335
Pretrained language models have been shown to be effective in many software-related generation tasks; however, they are not well-suited for editing tasks, as they are not designed to reason about edits. To address this, we propose a novel pretraining…
External link:
http://arxiv.org/abs/2208.05446
When a software bug is reported, developers engage in a discussion to collaboratively resolve it. While the solution is likely formulated within the discussion, it is often buried in a large amount of text, making it difficult to comprehend…
External link:
http://arxiv.org/abs/2110.04353
Author:
Zhang, Jiyang, Panthaplackel, Sheena, Nie, Pengyu, Mooney, Raymond J., Li, Junyi Jessy, Gligoric, Milos
Descriptive code comments are essential for supporting code comprehension and maintenance. We propose the task of automatically generating comments for overriding methods. We formulate a novel framework which accommodates the unique contextual and…
External link:
http://arxiv.org/abs/2103.13426
Natural language comments convey key aspects of source code such as implementation, usage, and pre- and post-conditions. Failure to update comments accordingly when the corresponding code is modified introduces inconsistencies, which is known to lead…
External link:
http://arxiv.org/abs/2010.01625
Neural sequence-to-sequence models are finding increasing use in editing documents, for example in correcting a text document or repairing source code. In this paper, we argue that common seq2seq models (with a facility to copy single tokens) are…
External link:
http://arxiv.org/abs/2006.04771
We formulate the novel task of automatically updating an existing natural language comment based on changes in the body of code it accompanies. We propose an approach that learns to correlate changes across two distinct language representations, to generate…
External link:
http://arxiv.org/abs/2004.12169
Comments are an integral part of software development; they are natural language descriptions associated with source code elements. Understanding explicit associations can be useful in improving code comprehensibility and maintaining the consistency…
External link:
http://arxiv.org/abs/1912.06728
Published in:
Proceedings of the AAAI Conference on Artificial Intelligence. 35:13622-13630
Neural sequence-to-sequence models are finding increasing use in editing documents, for example in correcting a text document or repairing source code. In this paper, we argue that common seq2seq models (with a facility to copy single tokens) are…