Abstract:
Recognition of emotions from multi-modal physiological signals remains one of the most challenging tasks facing the research community. Most existing works have focused on emotion recognition (ER) from single-modal signals, which limits their effectiveness. Some models do consider multiple modalities, but their results are unsatisfactory, leaving room for improvement in accuracy. This work therefore introduces a novel and effective mechanism that combines multiple techniques to achieve the required task. The proposed approach comprises the stages of pre-processing, signal-to-image conversion, feature extraction, feature selection and classification. Each signal modality is pre-processed separately, and the results are passed to a complex dual tree with fast lifting wavelet transform (CTFL-WT) that converts the signals into images. The converted images are fed to a channel attentive SqueezeNet (CASN) model for feature extraction. The extracted features are then reduced using an adaptive arithmetic optimization algorithm (AAOA), and the reduced features are classified by a hybrid DenseNet with long short-term memory (DLSTM) network. The proposed work classifies three affective states: neutral, stress and amusement. The implementation is carried out in Python, and the evaluations use the wearable stress and affect detection (WESAD) dataset. Compared with existing methods, the proposed work achieves an overall accuracy of 99% and an overall F1-score of 97.84%.
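To make the staged architecture concrete, the following is a minimal PyTorch sketch of the feature-extraction and classification stages, under stated assumptions: the channel attention in CASN is taken to be a squeeze-and-excitation block, and the AAOA feature-selection step and the DenseNet branch of DLSTM are elided for brevity, with an LSTM reading the attended SqueezeNet feature map directly. All class and variable names here are illustrative, not the authors' released code.

    # Hypothetical sketch of the paper's pipeline stages; names and
    # structure are assumptions, not the authors' implementation.
    import torch
    import torch.nn as nn
    from torchvision import models

    class ChannelAttention(nn.Module):
        """Squeeze-and-excitation style channel attention (assumed for CASN)."""
        def __init__(self, channels: int, reduction: int = 16):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        def forward(self, x):
            # x: (batch, channels, H, W) -> per-channel weights in [0, 1]
            w = self.fc(x.mean(dim=(2, 3)))           # global average pool
            return x * w.unsqueeze(-1).unsqueeze(-1)  # reweight channels

    class EmotionPipeline(nn.Module):
        """Channel-attentive SqueezeNet features -> LSTM head (DLSTM stand-in)."""
        def __init__(self, num_classes: int = 3, hidden: int = 128):
            super().__init__()
            self.backbone = models.squeezenet1_1(weights=None).features  # 512-ch output
            self.attn = ChannelAttention(512)
            # LSTM scans the spatial positions of the attended feature map
            self.lstm = nn.LSTM(input_size=512, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, num_classes)  # neutral / stress / amusement

        def forward(self, img):
            f = self.attn(self.backbone(img))   # (B, 512, h, w)
            seq = f.flatten(2).transpose(1, 2)  # (B, h*w, 512) spatial sequence
            _, (h_n, _) = self.lstm(seq)
            return self.head(h_n[-1])           # class logits

    model = EmotionPipeline()
    logits = model(torch.randn(2, 3, 224, 224))  # two signal-derived images
    print(logits.shape)  # torch.Size([2, 3])

In the full method described above, the attended features would additionally pass through AAOA-based selection before reaching the DenseNet-LSTM classifier; the sketch only illustrates how signal-derived images flow through an attention-augmented backbone into a recurrent classification head.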