Utilizing Deep Learning Towards Multi-Modal Bio-Sensing and Vision-Based Affective Computing
Author: Terrence J. Sejnowski, Tzyy-Ping Jung, Siddharth Siddharth
Year of publication: 2022
Subject: Signal Processing (eess.SP); Machine Learning (cs.LG); Human-Computer Interaction (cs.HC); Machine Learning (stat.ML); affective computing; deep learning; machine learning; object detection; transfer learning; human-computer interaction
Source: IEEE Transactions on Affective Computing, 13:96-107
ISSN: 2371-9850
DOI: 10.1109/taffc.2019.2916015
Description: In recent years, the use of bio-sensing signals such as the electroencephalogram (EEG) and electrocardiogram (ECG) has garnered interest for applications in affective computing. The parallel rise of deep learning has led to a huge leap in performance on various vision-based research problems such as object detection. Yet these advances in deep learning have not adequately translated into bio-sensing research. This work applies novel deep-learning-based methods to the bio-sensing and video data of four publicly available multi-modal emotion datasets. For each dataset, we first evaluate the emotion-classification performance obtained by each modality individually. We then evaluate the performance obtained by fusing the features from these modalities. We show that our algorithms outperform the results reported by other studies for emotion/valence/arousal/liking classification on the DEAP and MAHNOB-HCI datasets, and we set benchmarks for the newer AMIGOS and DREAMER datasets. We also evaluate our algorithms by combining the datasets and by using transfer learning, showing that the proposed method overcomes inconsistencies between the datasets. In total, we analyze multi-modal affective data from more than 120 subjects and 2,800 trials. Finally, utilizing a convolution-deconvolution network, we propose a new technique for identifying the salient brain regions corresponding to various affective states. Accepted for publication in IEEE Transactions on Affective Computing; the version on arXiv is the updated version of the same manuscript.
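The feature-fusion step described in the abstract can be illustrated with a short sketch. The snippet below is a minimal PyTorch example, not the authors' implementation: it assumes precomputed per-modality feature vectors, and the feature dimensions, layer sizes, and names such as `FusionClassifier` are illustrative assumptions. It shows the pattern the abstract describes, namely a small encoder per modality (each usable for the individual-modality evaluation) followed by a classification head over the concatenated features.

```python
# Minimal sketch (assumed architecture, not the paper's code): late fusion
# of per-modality feature vectors for binary emotion classification.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, eeg_dim=128, ecg_dim=32, face_dim=256, n_classes=2):
        super().__init__()
        # One small encoder per modality; each can also be evaluated alone.
        self.eeg_net = nn.Sequential(nn.Linear(eeg_dim, 64), nn.ReLU())
        self.ecg_net = nn.Sequential(nn.Linear(ecg_dim, 16), nn.ReLU())
        self.face_net = nn.Sequential(nn.Linear(face_dim, 64), nn.ReLU())
        # Classification head over the concatenated (fused) features.
        self.head = nn.Linear(64 + 16 + 64, n_classes)

    def forward(self, eeg, ecg, face):
        fused = torch.cat(
            [self.eeg_net(eeg), self.ecg_net(ecg), self.face_net(face)], dim=1
        )
        return self.head(fused)

# Usage with random stand-in features for a batch of 8 trials.
model = FusionClassifier()
eeg = torch.randn(8, 128)
ecg = torch.randn(8, 32)
face = torch.randn(8, 256)
logits = model(eeg, ecg, face)  # shape (8, 2)
```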
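The closing sentence of the abstract describes a convolution-deconvolution network used to identify salient brain regions. Below is a minimal encoder-decoder sketch of that idea under assumed dimensions (a 32x32 single-channel EEG topographic map); the input size and layer widths are illustrative assumptions, not the paper's architecture, and the reconstructed map is read as a per-region saliency estimate.

```python
# Minimal sketch of a convolution-deconvolution network over a 2-D EEG
# topographic map (assumed 1x32x32 input; sizes are illustrative only).
import torch
import torch.nn as nn

class ConvDeconv(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),   # 32 -> 16
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 16 -> 8
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=4, stride=2, padding=1),   # 16 -> 32
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# One EEG topography frame in, one same-sized map out.
net = ConvDeconv()
topo = torch.randn(1, 1, 32, 32)
saliency = net(topo)
print(saliency.shape)  # torch.Size([1, 1, 32, 32])
```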
Database: OpenAIRE
External link: