Author:
Yang, Yi; Wang, Ze; Tao, Wei; Liu, Xucheng; Jia, Ziyu; Wang, Boyu; Wan, Feng
Source:
IEEE Transactions on Affective Computing; Oct-Dec 2024, Vol. 15 Issue 4, p2012-2024, 13p
Abstract:
In electroencephalography-based (EEG-based) emotion recognition, the high non-stationarity of EEG signals and individual differences can lead to significant discrepancies between sessions/subjects, making generalization to a new session/subject very difficult. Most existing domain adaptation (DA) and multi-source domain adaptation (MSDA) techniques aim to mitigate this discrepancy by aligning feature distributions. However, when confronted with many diverse domain distributions, learning domain-invariant features by aligning pairwise feature distributions between domains can be hard or even counterproductive. To address this issue, this article proposes an attention-alignment approach for learning rich domain-invariant features. The motivation is simple: although individual differences cause large discrepancies in feature distributions in EEG-based emotion recognition, shared affective-cognitive attributes (attention) in the spectral and spatial domains can be observed within the same emotion categories. The proposed spectral-spatial attention alignment multi-source domain adaptation (S2A2-MSDA) method constructs domain attention to represent affective-cognitive attributes in the spatial and spectral domains and uses a domain-consistency loss to align them across domains. Furthermore, to facilitate discriminative feature learning on the target classes, S2A2-MSDA learns the conditional semantic information of the target domain using a pseudo-labeling method. The algorithm has been validated on the SEED and SEED-IV datasets in cross-session and cross-subject scenarios, respectively. Experimental results demonstrate that S2A2-MSDA outperforms existing representative DA and MSDA methods, achieving state-of-the-art performance. [ABSTRACT FROM AUTHOR] (An illustrative code sketch of the attention-alignment idea appears after this record.)
Database:
Complementary Index
External link:
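The following is a minimal, hypothetical sketch of the spectral-spatial attention-alignment idea summarized in the abstract. It assumes PyTorch and differential-entropy EEG features of shape (batch, channels, bands), as commonly used with SEED; the class and function names (SpectralSpatialAttention, attention_consistency_loss) are invented for illustration and do not reproduce the authors' S2A2-MSDA implementation, whose actual architecture and losses may differ.

# Hypothetical sketch of attention alignment across domains; not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpectralSpatialAttention(nn.Module):
    """Produce per-band (spectral) and per-channel (spatial) attention weights
    from EEG features of shape (batch, channels, bands)."""

    def __init__(self, n_channels: int = 62, n_bands: int = 5):
        super().__init__()
        self.spectral_fc = nn.Linear(n_channels, 1)  # pool over channels -> band scores
        self.spatial_fc = nn.Linear(n_bands, 1)      # pool over bands    -> channel scores

    def forward(self, x: torch.Tensor):
        # x: (batch, channels, bands)
        spectral = torch.softmax(self.spectral_fc(x.transpose(1, 2)).squeeze(-1), dim=-1)  # (batch, bands)
        spatial = torch.softmax(self.spatial_fc(x).squeeze(-1), dim=-1)                    # (batch, channels)
        return spectral, spatial


def attention_consistency_loss(source_attn: torch.Tensor, target_attn: torch.Tensor) -> torch.Tensor:
    """Align the mean attention profile of one source domain with that of the
    target domain (a stand-in for the domain-consistency loss in the abstract)."""
    return F.mse_loss(source_attn.mean(dim=0), target_attn.mean(dim=0))


if __name__ == "__main__":
    attn = SpectralSpatialAttention()
    xs = torch.randn(32, 62, 5)  # one source-domain batch (e.g., one subject/session)
    xt = torch.randn(32, 62, 5)  # target-domain batch
    (spec_s, spat_s), (spec_t, spat_t) = attn(xs), attn(xt)
    loss = attention_consistency_loss(spec_s, spec_t) + attention_consistency_loss(spat_s, spat_t)
    print(float(loss))

In a multi-source setting, one such consistency term per source domain would be added to the classification objective; the pseudo-labeling step mentioned in the abstract is omitted here.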