Automated Unit Test Improvement using Large Language Models at Meta

Author(s): Nadia Alshahwan, Jubin Chheda, Anastasia Finegenova, Beliz Gokkaya, Mark Harman, Inna Harper, Alexandru Marginean, Shubho Sengupta, Eddy Wang
Publication year: 2024
Subject:
Document type: Working Paper
Description: This paper describes Meta's TestGen-LLM tool, which uses LLMs to automatically improve existing human-written tests. TestGen-LLM verifies that its generated test classes successfully clear a set of filters that assure measurable improvement over the original test suite, thereby eliminating problems due to LLM hallucination. We describe the deployment of TestGen-LLM at Meta test-a-thons for the Instagram and Facebook platforms. In an evaluation on Reels and Stories products for Instagram, 75% of TestGen-LLM's test cases built correctly, 57% passed reliably, and 25% increased coverage. During Meta's Instagram and Facebook test-a-thons, it improved 11.5% of all classes to which it was applied, with 73% of its recommendations being accepted for production deployment by Meta software engineers. We believe this is the first report on industrial-scale deployment of LLM-generated code backed by such assurances of code improvement.
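The description above outlines a filter cascade: a generated test class is kept only if it builds, passes reliably, and measurably increases coverage. The following is a minimal Python sketch of that idea, not Meta's implementation; the `builds`, `passes`, and `coverage_with` callables and the `repeats` parameter are hypothetical stand-ins for whatever build system, test runner, and coverage tool a real deployment would wire in.

```python
# Hedged sketch of the assured-improvement filter cascade described in the
# abstract. All callables below are hypothetical placeholders.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Candidate:
    """An LLM-generated test class proposed as an extension of an existing suite."""
    source: str


def passes_filters(
    candidate: Candidate,
    builds: Callable[[Candidate], bool],          # does the extended suite compile?
    passes: Callable[[Candidate], bool],          # does one run of the new test pass?
    coverage_with: Callable[[Candidate], float],  # coverage including the candidate
    baseline_coverage: float,                     # coverage of the original suite
    repeats: int = 5,                             # re-run to reject flaky tests
) -> bool:
    # Filter 1: discard candidates that do not build
    # (e.g. hallucinated APIs or types).
    if not builds(candidate):
        return False
    # Filter 2: discard failing or flaky candidates; to count as passing
    # "reliably", the test must pass on every repeated run.
    if not all(passes(candidate) for _ in range(repeats)):
        return False
    # Filter 3: keep only candidates that strictly improve measured coverage,
    # so every surviving recommendation is a guaranteed improvement.
    return coverage_with(candidate) > baseline_coverage
```

Because each filter is a hard gate rather than a heuristic score, any candidate that survives carries a verifiable guarantee, which is what lets hallucinated or redundant tests be discarded automatically before an engineer ever reviews them.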
Comment: 12 pages, 8 figures, 32nd ACM Symposium on the Foundations of Software Engineering (FSE 24)
Database: arXiv