Multimodal Data Collection Made Easy: The EZ-MMLA Toolkit
Author: Bertrand Schneider, Jovin Leong, Javaria Hassan
Year of publication: 2021
Subject: Data collection; Computer science; Data stream mining; Multimodal data; Frequency data; Context (language use); Variety (cybernetics); Code (cryptography); Human–computer interaction; Gesture; Education; Social sciences; Medical and health sciences; Clinical medicine; Neurology & neurosurgery
Source: LAK
DOI: 10.1145/3448139.3448201
Description: While Multimodal Learning Analytics (MMLA) is becoming a popular methodology in the LAK community, most educational researchers still rely on traditional instruments for capturing learning processes (e.g., click-stream data, log data, self-reports, qualitative observations). MMLA has the potential to complement and enrich traditional measures of learning by providing high-frequency data on learners' behavior, cognition, and affect. However, there is currently no easy-to-use toolkit for recording multimodal data streams: existing methodologies rely on physical sensors and custom-written code for accessing sensor data. In this paper, we present the EZ-MMLA toolkit, implemented as a website that provides easy access to the latest machine learning algorithms for collecting a variety of data streams from webcams: attention (eye-tracking), physiological states (heart rate), body posture (skeletal data), hand gestures, emotions (from facial expressions and speech), and lower-level computer vision algorithms (e.g., fiducial and color tracking). The toolkit runs from any browser and requires no special hardware or programming experience. We compare it with traditional methods and describe a case study in which the EZ-MMLA toolkit was used in a classroom context. We conclude by discussing other applications of the toolkit, potential limitations, and future steps.
Database: OpenAIRE
External link:
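
The record above does not include the toolkit's source code, and none is claimed here. As a rough sketch of the general approach the abstract describes (client-side machine learning on a webcam stream, with no special hardware or installed software), the TypeScript below wires a browser webcam into a pretrained pose model via the TensorFlow.js pose-detection package. The model choice (MoveNet), the helper name `capturePoseStream`, and the logging strategy are illustrative assumptions, not EZ-MMLA's actual implementation.

```ts
// Illustrative sketch only -- NOT the EZ-MMLA implementation.
// Shows the pattern the abstract describes: skeletal tracking from a
// webcam, entirely in the browser, with no physical sensors to install.
import '@tensorflow/tfjs'; // tfjs core plus default backends
import * as poseDetection from '@tensorflow-models/pose-detection';

// Hypothetical helper: stream skeletal keypoints from a <video> element.
async function capturePoseStream(video: HTMLVideoElement): Promise<void> {
  // Ask the browser for webcam access; no drivers or custom sensor code.
  video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });
  await video.play();

  // Load a pretrained pose model; MoveNet is one of several models
  // offered by the pose-detection package.
  const detector = await poseDetection.createDetector(
    poseDetection.SupportedModels.MoveNet,
  );

  // Estimate keypoints frame by frame. A real collection tool would
  // buffer these records and export them (e.g., as CSV) for analysis.
  const loop = async () => {
    const poses = await detector.estimatePoses(video);
    if (poses.length > 0) {
      // Each keypoint carries x/y pixel coordinates and a confidence score.
      console.log(Date.now(), poses[0].keypoints);
    }
    requestAnimationFrame(loop);
  };
  requestAnimationFrame(loop);
}

// Usage (in a page containing <video id="cam"></video>):
// capturePoseStream(document.getElementById('cam') as HTMLVideoElement);
```

Running inference client-side keeps the video on the participant's machine, which is consistent with the abstract's claim that such a toolkit can run from any browser without special hardware or programming experience.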