Generative AI to Generate Test Data Generators

Authors: Baudry, Benoit; Etemadi, Khashayar; Fang, Sen; Gamage, Yogya; Liu, Yi; Liu, Yuxin; Monperrus, Martin; Ron, Javier; Silva, André; Tiwari, Deepika
Year of publication: 2024
Source: IEEE Software, 2024
Document type: Working Paper
DOI: 10.1109/MS.2024.3418570
Description: Generating fake data is an essential dimension of modern software testing, as demonstrated by the number and significance of data-faking libraries. Yet, developers of faking libraries cannot keep up with the wide range of data to be generated for different natural languages and domains. In this paper, we assess the ability of generative AI to generate test data across domains. We design three types of prompts for Large Language Models (LLMs), which perform test data generation tasks at different levels of integrability: 1) generating raw test data, 2) synthesizing programs in a specific language that generate useful test data, and 3) producing programs that use state-of-the-art faker libraries. We evaluate our approach by prompting LLMs to generate test data for 11 domains. The results show that LLMs can successfully generate realistic test data generators in a wide range of domains at all three levels of integrability.
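To make the three levels concrete, consider the third and most integrable one: a program that delegates realistic data synthesis to an established faking library. The sketch below is illustrative only and is not code from the paper; it assumes Python's Faker library, and the patient-record domain and its field names are hypothetical.

```python
# Illustrative "level 3" generator: a program that uses a faker library
# to produce realistic, locale-aware test data.
# Assumes the Python Faker package (pip install Faker); the domain
# (patient records) and field names are hypothetical, not from the paper.
from faker import Faker

def generate_patient_records(n: int, locale: str = "en_US") -> list[dict]:
    """Return n fake patient records localized to the given locale."""
    fake = Faker(locale)
    return [
        {
            "name": fake.name(),
            "birthdate": fake.date_of_birth(minimum_age=0, maximum_age=99).isoformat(),
            "address": fake.address(),
            "phone": fake.phone_number(),
        }
        for _ in range(n)
    ]

if __name__ == "__main__":
    # Swapping the locale exercises the cross-language dimension the
    # paper highlights, e.g. Swedish-formatted names and addresses.
    for record in generate_patient_records(3, locale="sv_SE"):
        print(record)
```

By contrast, a level-1 prompt would ask the LLM for such records directly as raw text, and a level-2 prompt would ask it to write the generator program from scratch, without relying on a library like Faker.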
Database: arXiv