TERMINATOR: Better Automated UI Test Case Prioritization
Author: | Kyle Patrick, Fahmid Morshed Fahid, Tim Menzies, Snehit Cherian, Gregg Rothermel, Zhe Yu |
---|---|
Language: | English |
Year of publication: | 2019 |
Subject: |
FOS: Computer and information sciences
Computer Science - Software Engineering (cs.SE); Computer Science - Machine Learning (cs.LG); Software engineering; Software development; Graphical user interface testing; Test case; Microservices; Black box |
Zdroj: | ESEC/SIGSOFT FSE |
Description: | Automated UI testing is an important component of the continuous integration process of software development. A modern web-based UI is an amalgam of reports from dozens of microservices written by multiple teams. A query on one page that opens another page will fail if any of that page's microservices fails. As a result, the overall cost of automated UI testing is high, since UI elements cannot be tested in isolation. For example, the entire automated UI testing suite at LexisNexis takes around 30 hours (3-5 hours on the cloud) to execute, which slows down the continuous integration process. To mitigate this problem and give developers faster feedback on their code, test case prioritization techniques are used to reorder the automated UI test cases so that more failures are detected earlier. Because much of automated UI testing is "black box" in nature, very little information (only the test case descriptions and testing results) can be used to prioritize these test cases. Hence, this paper evaluates 17 "black box" test case prioritization (TCP) approaches that do not rely on source code information. Among these, we propose a novel TCP approach that dynamically re-prioritizes the test cases whenever new failures are detected, by applying and adapting a state-of-the-art framework from the total recall problem. Experimental results on LexisNexis automated UI testing data show that our new approach (which we call TERMINATOR) outperforms prior state-of-the-art approaches in terms of failure detection rates, with negligible CPU overhead. 10+2 pages, 4 figures, 3 tables, ESEC/FSE 2019 industry track |
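The dynamic re-prioritization idea described above (reorder the pending tests whenever a new failure surfaces, using only test descriptions and past results) can be illustrated with a minimal sketch in plain Python. This is an illustrative approximation, not the paper's actual TERMINATOR algorithm (which adapts an active-learning total recall framework); the token-overlap scoring, the `prioritize` function, and the example test suite are all invented here for illustration.

```python
from collections import Counter

def tokens(description):
    """Split a test case description into lowercase word tokens."""
    return set(description.lower().split())

def prioritize(test_cases, run_test):
    """Dynamically reorder a test suite: after each observed failure,
    promote pending tests whose descriptions resemble failing ones.

    test_cases: dict mapping test id -> description string.
    run_test:   callable(test_id) -> True if that test fails.
    Returns the execution order as a list of test ids.
    """
    remaining = dict(test_cases)
    failure_tokens = Counter()  # vocabulary of failing test descriptions
    order = []
    while remaining:
        # Score pending tests by overlap with the failure vocabulary;
        # before any failure is seen, this falls back to queue order.
        def score(item):
            _tid, desc = item
            return sum(failure_tokens[t] for t in tokens(desc))
        tid, desc = max(remaining.items(), key=score)
        del remaining[tid]
        order.append(tid)
        if run_test(tid):                        # failure detected:
            failure_tokens.update(tokens(desc))  # re-weight the queue
    return order
```

For example, if a "search results" test fails, a pending "search filters" test is promoted ahead of unrelated tests, so correlated failures surface earlier in the run, which is the goal of the prioritization.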
Database: | OpenAIRE |
External link: |