Showing 1 - 3 of 3 for search: '"Harper, Inna"'
Author:
Alshahwan, Nadia, Blasi, Arianna, Bojarczuk, Kinga, Ciancone, Andrea, Gucevska, Natalija, Harman, Mark, Schellaert, Simon, Harper, Inna, Jia, Yue, Królikowski, Michał, Lewis, Will, Martac, Dragos, Rojas, Rubmary, Ustiuzhanina, Kate
This paper reports the results of the deployment of Rich-State Simulated Populations at Meta for both automated and manual testing. We use simulated users (aka test users) to mimic user interactions and acquire state in much the same way that real users…
External link:
http://arxiv.org/abs/2403.15374
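The snippet's idea of simulated users that mimic interactions to acquire state can be pictured with a minimal Python sketch; the SimulatedUser class and its actions below are hypothetical illustrations, not the paper's API:

```python
# Hypothetical sketch: a simulated ("test") user accumulates state
# through the same kinds of actions a real user would perform.

class SimulatedUser:
    def __init__(self, user_id: str):
        self.user_id = user_id
        self.state: dict[str, list[str]] = {"friends": [], "posts": []}

    def add_friend(self, other: "SimulatedUser") -> None:
        # Acquire social-graph state on both sides of the relationship.
        self.state["friends"].append(other.user_id)
        other.state["friends"].append(self.user_id)

    def post(self, text: str) -> None:
        self.state["posts"].append(text)

# Usage: build up rich state first, then test against it.
alice, bob = SimulatedUser("alice"), SimulatedUser("bob")
alice.add_friend(bob)
alice.post("hello")
assert bob.user_id in alice.state["friends"]
```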
Author:
Alshahwan, Nadia, Chheda, Jubin, Finegenova, Anastasia, Gokkaya, Beliz, Harman, Mark, Harper, Inna, Marginean, Alexandru, Sengupta, Shubho, Wang, Eddy
This paper describes Meta's TestGen-LLM tool, which uses LLMs to automatically improve existing human-written tests. TestGen-LLM verifies that its generated test classes successfully clear a set of filters that assure measurable improvement over the original test suite…
External link:
http://arxiv.org/abs/2402.09171
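The filtering idea in this snippet lends itself to a small sketch: a generated test class is kept only if it clears every filter in a chain. The filter names below (builds, passes, improves_coverage) are assumptions inferred from the abstract's wording, not TestGen-LLM's actual implementation:

```python
# Hypothetical filter chain: each LLM-generated test class must clear
# every filter to count as a measurable improvement.

from typing import Callable

def builds(test_class: str) -> bool:
    return True  # stand-in: compile the candidate test class

def passes(test_class: str) -> bool:
    return True  # stand-in: run it; failing or flaky tests are rejected

def improves_coverage(test_class: str) -> bool:
    return True  # stand-in: require measurably higher coverage

FILTERS: list[Callable[[str], bool]] = [builds, passes, improves_coverage]

def clears_filters(test_class: str) -> bool:
    """A candidate survives only if every filter accepts it."""
    return all(f(test_class) for f in FILTERS)

candidates = ["class GeneratedTest1 {...}", "class GeneratedTest2 {...}"]
accepted = [t for t in candidates if clears_filters(t)]
```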
Author:
Alshahwan, Nadia, Harman, Mark, Harper, Inna, Marginean, Alexandru, Sengupta, Shubho, Wang, Eddy
In this paper we address the following question: How can we use Large Language Models (LLMs) to improve code independently of a human, while ensuring that the improved code - does not regress the properties of the original code? - improves the original in a verifiable and measurable way?
External link:
http://arxiv.org/abs/2402.04380
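The question posed in this snippet amounts to a two-part acceptance check, sketched below under assumed helper names (regression_tests_pass, metric); this illustrates the obligation the abstract states, not the paper's method:

```python
# Hypothetical acceptance check for an LLM-proposed code improvement:
# accept only if it (1) does not regress existing behaviour and
# (2) measurably improves on the original.

def regression_tests_pass(code: str) -> bool:
    return True  # stand-in: run the original code's test suite

def metric(code: str) -> float:
    return float(len(code))  # stand-in: lower is better, e.g. runtime

def accept_improvement(original: str, candidate: str) -> bool:
    """Non-regression AND a verifiable, measurable improvement."""
    return regression_tests_pass(candidate) and metric(candidate) < metric(original)
```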