The Mind Is a Powerful Place: How Showing Code Comprehensibility Metrics Influences Code Understanding
Authors: | Daniel Graziotin, Andreas Preikschat, Marvin Wyrich, Stefan Wagner |
---|---|
Language: | English |
Year of publication: | 2020 |
Subject: | FOS: Computer and information sciences; Source code; Computer science; Software development; Static program analysis; Software maintenance; Software quality; Software metric; Human–computer interaction; Software Engineering (cs.SE); Computers and Society (cs.CY); Programming Languages (cs.PL) |
Source: | ICSE |
Description: | Static code analysis tools and integrated development environments present developers with quality-related software metrics, some of which describe the understandability of source code. Software metrics influence overarching strategic decisions that impact the future of companies and the prioritization of everyday software development tasks. Several software metrics, however, lack validation: we simply choose to trust that they reflect what they are supposed to measure. Some of them have even been shown not to measure the quality aspects they intend to measure. Yet, they influence us through biases in our cognitive-driven actions. In particular, they might anchor us in our decisions. Whether the anchoring effect exists with software metrics has not been studied yet. We conducted a randomized and double-blind experiment to investigate the extent to which a displayed metric value for source code comprehensibility anchors developers in their subjective rating of source code comprehensibility, whether performance is affected by the anchoring effect when working on comprehension tasks, and which individual characteristics might play a role in the anchoring effect. We found that the displayed value of a comprehensibility metric has a significant and large anchoring effect on a developer's code comprehensibility rating. The effect does not seem to affect the time or correctness when working on comprehension questions related to the code snippets under study. Since the anchoring effect is one of the most robust cognitive biases, and we have limited understanding of the consequences of the demonstrated manipulation of developers by non-validated metrics, we call for an increased awareness of the responsibility in code quality reporting and for corresponding tools to be based on scientific evidence. To appear in: Proceedings of the 43rd International Conference on Software Engineering (ICSE '21), Madrid, Spain. 12 pages, 1 figure. Postprint, after peer review |
Database: | OpenAIRE |
External link: |