Author: |
Sándor Király, Károly Nehéz, Olivér Hornyák |
Language: |
English |
Year of publication: |
2017 |
Subject: |
|
Source: |
Research in Learning Technology, Vol 25, Iss 0, Pp 1-16 (2017) |
Document type: |
article |
ISSN: |
2156-7077 |
DOI: |
10.25304/rlt.v25.1945 |
Description: |
Recently, massive open online courses (MOOCs) have been offering a new online approach in the field of distance learning and online education. A typical MOOC course consists of video lectures, reading material and easily accessible tests for students. For a computer programming course, it is important to provide interactive, dynamic, online coding exercises and more complex programming assignments for learners. It is expedient for students to receive prompt feedback on their coding submissions. Although the automated programme evaluation subsystems of learning management systems are capable of assessing submitted source programme files, MOOC systems lack such a grader for evaluating students' assignments, with the result that course staff would be required to assess thousands of programmes submitted by the participants of the course without the benefit of an automatic grader. This paper presents a new concept for grading students' programming submissions, together with improved techniques based on the Java unit testing framework that enable automatic grading of code chunks. Examples are also given, such as the creation of unique exercises by dynamically generating assignment parameters in a MOOC programming course, combined with coding style recognition to teach coding standards. |
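The following is a minimal sketch of what such JUnit-based grading with dynamically generated assignment parameters could look like; it is an illustration under assumptions, not the paper's implementation. The exercise (sumOfEvens), the class names (SumOfEvensGraderTest, StudentSolution) and the per-student seed (STUDENT_ID) are all hypothetical.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

import java.util.Random;

public class SumOfEvensGraderTest {

    // In a real grader the identifier would be supplied by the MOOC platform
    // for each learner; a fixed value is used here only for illustration.
    private static final long STUDENT_ID = 42L;

    // Reference implementation maintained by course staff.
    private static int referenceSumOfEvens(int[] values) {
        int sum = 0;
        for (int v : values) {
            if (v % 2 == 0) {
                sum += v;
            }
        }
        return sum;
    }

    @Test
    public void submittedCodeMatchesReferenceOnGeneratedInput() {
        // Seeding with the student ID yields a unique but reproducible exercise,
        // so every learner works on different input data.
        Random random = new Random(STUDENT_ID);
        int[] input = random.ints(20, -100, 100).toArray();

        int expected = referenceSumOfEvens(input);
        int actual = StudentSolution.sumOfEvens(input);

        assertEquals("sumOfEvens returned a wrong result for the generated input",
                expected, actual);
    }

    // Stand-in for a learner's submission; a real grader would compile and load
    // the uploaded source file instead of bundling the class with the test.
    static class StudentSolution {
        static int sumOfEvens(int[] values) {
            int sum = 0;
            for (int v : values) {
                if (v % 2 == 0) {
                    sum += v;
                }
            }
            return sum;
        }
    }
}

Running such a test class with JUnit would give the submission a pass/fail result that the MOOC platform could map to a grade; style checks would be applied as a separate step.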
Database: |
Directory of Open Access Journals |
External link: |
|