Analysis of Programming Assessments — Building an Open Repository for Measuring Competencies

Authors: Torsten Brinda, Mike Barkmin
Year of publication: 2020
Subject:
Source: Koli Calling
DOI: 10.1145/3428029.3428039
Description: Different approaches to and aims of teaching programming use context-specific languages, which may support different paradigms. We are therefore developing a framework for modeling programming competencies regardless of the language or paradigm used. In this paper, we present an open repository of assessments for measuring competencies, to support our theoretical model. Our goal is to leverage existing programming assessments by evaluating their quality and their fit to our competency framework. We conducted a systematic literature review to find assessments in the ACM DL and developed two schemes: one for evaluating the quality of the assessments against three criteria (objectivity, reliability, and validity) and one for evaluating their fit to the competency framework. An in-depth analysis of 13 assessments showed that all fit our competency framework, with an average coverage of 39% of all concepts. Regarding quality, three assessments reported reliability by evaluating Cronbach's alpha, and five reported validity using different methods. To expand our open repository and improve our framework, we plan a five-step program: analyze more assessments, develop a guide, fill gaps, specialize, and replicate assessments. We hope that providing this framework will foster the development of competency models in the field of programming.
Database: OpenAIRE
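
The abstract notes that three of the reviewed assessments reported reliability via Cronbach's alpha, the standard internal-consistency statistic for multi-item tests. As a minimal sketch of how that statistic is computed from an item-score matrix (the function and the sample data below are illustrative assumptions, not material from the paper):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance of total scores)
    """
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)      # sample variance per item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of per-respondent totals
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Hypothetical example: 5 respondents answering 4 items scored 0/1 for correctness
scores = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [0, 1, 1, 1],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
])
print(f"alpha = {cronbach_alpha(scores):.3f}")
```

Alpha is commonly read against a rule-of-thumb threshold of roughly 0.7 for acceptable internal consistency, which suggests one way a repository like the one proposed could flag assessments with weak reliability evidence.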