Description: |
The implementation of internationalisation strategies by higher education policymakers has led to a steady increase in English-taught programmes (Wächter & Maiworm 2014). The recognition of English as an economic (Ehrenreich 2010) and academic (Ljosland 2011) lingua franca also applies to engineering, where proficient use of English is highly valued (Minsch et al. 2017) and practical skills, such as the writing of technical reports, are expected outcomes of engineering students' university-level education (Karras et al. 2015: 8-13). Despite its importance, placement testing of writing skills tends to be avoided on the grounds that it is too time-consuming, especially when large numbers of new students are involved. However, advances in computerised text analysis in the form of Automated Essay Scoring (AES) show great promise in meeting this need. This paper describes the design, results and further development of an AES tool and CEFR-level prediction algorithm that were created and experimentally implemented at a major University of Applied Sciences in Switzerland as part of an online English placement test for first-year engineering students. In line with current research in AES, the algorithm was developed using a prediction-accuracy, pseudo-black-box approach (see Vanhove et al. 2019; Yannakoudakis 2013) and a small training corpus of texts with known CEFR levels (N=50). The tool's advantages will also be discussed. Because it is written and run entirely in an R environment (R Core Team 2017, version 3.4.3), with the koRpus package (Michalke 2017) as its workhorse, the user has complete control over its implementation and can integrate it with other advanced text analyses available in R (e.g., text mining, word embeddings). The algorithm requires minimal resources, is simple to use and is cost-effective. Because the tool can grade large numbers of texts in bulk, it is well suited to placement testing; it is also efficient: in its experimental run, it calculated the CEFR levels of 400 student essays in 10 minutes of run time on an ordinary laptop computer. While gold-standard (human) validation evidence is still required, external validation tests support the tool's accuracy: its CEFR-level predictions for a selection of texts were closer to the official CEFR levels than the grades assigned by other online AES systems. Finally, further research perspectives and dissemination in the form of a Shiny web app and an R package will be presented.
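
To make the approach concrete, the following is a minimal sketch of the kind of koRpus-based pipeline the abstract describes. It is an illustration under assumptions, not the authors' actual algorithm: the example text, the choice of features and the cefr_model object are hypothetical stand-ins.

    # Minimal sketch of koRpus-based feature extraction (illustrative only).
    # The English language package must be installed once via
    # install.koRpus.lang("en").
    library(koRpus)
    library(koRpus.lang.en)

    essay <- "The turbine converts kinetic energy into electrical energy. All measurements are reported in SI units."
    tok <- tokenize(essay, format = "obj", lang = "en")

    # A few candidate predictors of writing proficiency
    ld <- lex.div(tok, measure = c("TTR", "MTLD"))  # lexical diversity
    rd <- readability(tok, index = "ARI")           # readability (needs no syllable counts)

    features <- data.frame(
      ttr  = ld@TTR,        # type-token ratio
      mtld = ld@MTLD$MTLD,  # measure of textual lexical diversity
      ari  = rd@ARI$grade   # Automated Readability Index grade level
    )

    # A model previously trained on the 50 benchmarked texts (hypothetical)
    # would then map such features to a CEFR level, e.g.:
    # predict(cefr_model, newdata = features)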
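
The bulk-grading workflow highlighted above could be sketched along the following lines; here score_essay() and the essays/ folder are hypothetical stand-ins for the tool's actual feature extraction and prediction step.

    # Hypothetical bulk-grading loop over a folder of essay files.
    files <- list.files("essays", pattern = "\\.txt$", full.names = TRUE)

    score_essay <- function(path) {
      tok <- tokenize(path, format = "file", lang = "en")
      # ... extract features and predict a CEFR level as sketched above ...
      "B2"  # placeholder return value
    }

    results <- data.frame(
      file = basename(files),
      cefr = vapply(files, score_essay, character(1))
    )
    write.csv(results, "cefr_placements.csv", row.names = FALSE)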