Showing 1 - 6 of 6
for search: '"Jie-Lin Qiu"'
Published in:
IEEE Transactions on Cognitive and Developmental Systems. 14:715-729
Multimodal signals are powerful for emotion recognition since they can represent emotions comprehensively. In this paper, we compare the recognition performance and robustness of two multimodal emotion recognition models: deep canonical correlation analysis…
Published in:
EMBC
People generally agree that emotion processing differs between males and females. However, current hypotheses of sex differences need more objective evidence and quantitative assessment. In this paper, we investigate the sex difference in classifying…
Published in:
BIBM
Web of Science
Emotion is a subjective, conscious experience when people face different kinds of stimuli. In this paper, we propose a new model, Correlated Attention Network (CAN), for multimodal emotion recognition. Correlated Attention Network is an extension…
Author:
Jie-Lin Qiu, Wei-Ye Zhao
Published in:
ICCI*CC
Web of Science
Emotion is a subjective, conscious experience when people face internal or external stimuli. This paper addresses the problem that affective computing is difficult to apply intuitively in real-world practical fields, such as emotion-related disease diagnosis…
Published in:
Brain Informatics ISBN: 9783030055868
BI
This paper addresses the problem that affective computing is difficult to apply intuitively in real practical fields, such as medical disease diagnosis, due to poor direct understanding of physiological signals. In view of the fact that…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::93201e446f60b88889d2596ac4055654
https://doi.org/10.1007/978-3-030-05587-5_1
Published in:
Neural Information Processing ISBN: 9783030042202
ICONIP (5)
Emotion is a subjective, conscious experience when people face different kinds of stimuli. In this paper, we adopt Deep Canonical Correlation Analysis (DCCA) for high-level coordinated representation to extract features from EEG and eye movements…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::46ff3c87836098ca6ff621fbe7c6eeee
https://doi.org/10.1007/978-3-030-04221-9_20