Evaluating Large Language Models in Detecting Test Smells
Author: Lucas, Keila; Gheyi, Rohit; Soares, Elvys; Ribeiro, Márcio; Machado, Ivan
Publication Year: 2024
Document Type: Working Paper
Description: Test smells are coding issues that typically arise from inadequate practices, a lack of knowledge about effective testing, or deadline pressure to complete projects. Their presence can negatively impact the maintainability and reliability of software. Although tools exist that detect test smells using advanced static analysis or machine learning techniques, they often require considerable effort to use. This study evaluates the capability of Large Language Models (LLMs) to detect test smells automatically. We evaluated ChatGPT-4, Mistral Large, and Gemini Advanced on 30 types of test smells, across codebases in seven different programming languages collected from the literature. ChatGPT-4 identified 21 types of test smells, Gemini Advanced identified 17, and Mistral Large detected 15. Conclusion: the LLMs demonstrated potential as valuable tools for identifying test smells.
Comment: 7 pages. Accepted at the Insightful Ideas and Emerging Results (IIER) Track of the Brazilian Symposium on Software Engineering (SBES 2024).
Database: arXiv
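The abstract defines test smells only in general terms. For concreteness, below is a minimal illustrative sketch of one well-known smell from the test smell literature, Assertion Roulette: several assertions in a single test method, none with an explanatory message, so a failing run does not indicate which expectation broke. The example is hypothetical, written in Java with JUnit 5, and is not taken from the paper's dataset.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.ArrayList;
import java.util.List;
import org.junit.jupiter.api.Test;

// Illustrative example of the "Assertion Roulette" test smell (hypothetical,
// not from the paper's dataset): multiple unexplained assertions in one test.
class AssertionRouletteExampleTest {

    @Test
    void testListOperations() {
        List<String> items = new ArrayList<>();
        items.add("apple");
        items.add("pear");

        // Smell: if any assertion fails, the test report gives no message
        // hinting at which expectation was violated.
        assertEquals(2, items.size());
        assertEquals("apple", items.get(0));
        assertEquals("pear", items.get(1));
        assertEquals(0, items.indexOf("apple"));
    }
}
```

A typical repair is to add a descriptive message to each assertion or to split the method into focused tests; flagging code like the above is the kind of detection task the paper evaluates the LLMs on.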