Author:
Hiroaki Ogata, Brendan Flanagan, Kyosuke Takami, Yiling Dai, Ryosuke Nakamoto, Kensuke Takii
Source:
Research & Practice in Technology Enhanced Learning, Vol. 19 (2024), pp. 1–30
Abstract:
As artificial intelligence systems increasingly make high-stakes recommendations and decisions automatically in many facets of our lives, the use of explainable artificial intelligence to inform stakeholders of the reasons behind such systems' decisions has been gaining attention in a wide range of fields, including education. Education also has a long history of research into self-explanation, in which students explain the process behind their answers. Self-explanation is recognized as a beneficial intervention for promoting metacognitive skills; however, it also has unexplored potential to provide insight into the problems learners experience due to inadequate prerequisite knowledge and skills, or due to difficulty in applying them to the task at hand. While this aspect of self-explanation has been of interest to teachers, there is little research into using such information to inform educational AI systems. In this paper, we propose a system in which students and the AI system explain their decisions to each other: students self-explain their cognition during the answering process, and the system explains its recommendations based on internal mechanisms and other abstract representations of its model algorithms. [ABSTRACT FROM AUTHOR]
Database:
Complementary Index