Interpretable Explainability for Face Expression Recognition
Author: Shingjergi, K., Iren, Y.D., Klemke, R., Urlings, C.C.J., Böttger, F.
Contributors: Department of Information Science, RS-Research Program Learning and Innovation in Resilient Systems (LIRS), RS-Research Program Educational Research on Activating (Online) Education (ERA), Department of Technology Enhanced Learning and Innovation, RS-Research Line Technology Enhanced Learning and Innovation (part of ERA program)
Year of publication: 2023
Source: Shingjergi, K., Iren, Y.D., Klemke, R., Urlings, C.C.J. & Böttger, F. 2023, 'Interpretable Explainability for Face Expression Recognition', in Heterodox Methods for Interpretable and Efficient Artificial Intelligence, 2022. Zenodo, The First International Conference on Hybrid Human-Artificial Intelligence, Amsterdam, Netherlands, 13/06/2022. https://doi.org/10.5281/zenodo.7740019
DOI: 10.5281/zenodo.7740019
Description: Training facial emotion recognition models requires large amounts of data and costly annotation processes. Additionally, it is challenging to explain the operating principles and outcomes of such models in a way that is interpretable and understandable by humans. In this paper, we introduce a gamified method of acquiring annotated facial emotion data without an explicit labeling effort by humans. Such an approach effectively creates a robust, sustainable, and continuous machine learning training process. Moreover, we present a novel way of providing interpretable explanations for facial emotion recognition using action units as intermediary features and translating them into natural language descriptions of facial expressions of emotions.
Database: OpenAIRE
External link:
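The abstract's core idea is to use facial action units (FACS codes) as an intermediary, human-readable layer and translate them into natural-language descriptions. The sketch below is not taken from the paper; the AU glosses and the describe_expression helper are illustrative assumptions showing, in minimal Python, how such a translation step could look.

```python
# Illustrative sketch only (not the authors' implementation): mapping detected
# FACS action units (AUs) to plain-language phrases and composing a sentence.

# Plain-language glosses for a few standard FACS action units (assumed wording).
AU_DESCRIPTIONS = {
    1: "the inner eyebrows are raised",
    2: "the outer eyebrows are raised",
    4: "the eyebrows are lowered and drawn together",
    6: "the cheeks are raised",
    12: "the lip corners are pulled up into a smile",
    15: "the lip corners are pulled down",
    25: "the lips are parted",
    26: "the jaw has dropped",
}

def describe_expression(active_aus):
    """Turn a set of detected action-unit codes into a readable sentence."""
    phrases = [AU_DESCRIPTIONS[au] for au in sorted(active_aus) if au in AU_DESCRIPTIONS]
    if not phrases:
        return "No describable facial movement was detected."
    if len(phrases) == 1:
        body = phrases[0]
    else:
        body = ", ".join(phrases[:-1]) + ", and " + phrases[-1]
    return "In this face, " + body + "."

if __name__ == "__main__":
    # Example: AU6 + AU12 is the classic pattern associated with a happy expression.
    print(describe_expression({6, 12}))
    # -> "In this face, the cheeks are raised, and the lip corners are pulled up into a smile."
```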