AI-Assisted Assessment of Coding Practices in Modern Code Review

Authors: Vijayvergiya, Manushree, Salawa, Małgorzata, Budiselić, Ivan, Zheng, Dan, Lamblin, Pascal, Ivanković, Marko, Carin, Juanjo, Lewko, Mateusz, Andonov, Jovan, Petrović, Goran, Tarlow, Daniel, Maniatis, Petros, Just, René
Publication year: 2024
Subject:
Document type: Working Paper
DOI: 10.1145/3664646.3665664
Description: Modern code review is a process in which an incremental code contribution made by a code author is reviewed by one or more peers before it is committed to the version control system. An important element of modern code review is verifying that code contributions adhere to best practices. While some of these best practices can be automatically verified, verifying others is commonly left to human reviewers. This paper reports on the development, deployment, and evaluation of AutoCommenter, a system backed by a large language model that automatically learns and enforces coding best practices. We implemented AutoCommenter for four programming languages (C++, Java, Python, and Go) and evaluated its performance and adoption in a large industrial setting. Our evaluation shows that an end-to-end system for learning and enforcing coding best practices is feasible and has a positive impact on the developer workflow. Additionally, this paper reports on the challenges associated with deploying such a system to tens of thousands of developers and the corresponding lessons learned.
Comment: To appear at the ACM International Conference on AI-Powered Software (AIware '24)
Database: arXiv