A Combined CNN Architecture for Speech Emotion Recognition

Authors: Rolinson Begazo, Ana Aguilera, Irvin Dongo, Yudith Cardinale
Language: English
Year of publication: 2024
Subject:
Source: Sensors, Vol 24, Iss 17, p 5797 (2024)
Document type: article
ISSN: 1424-8220
DOI: 10.3390/s24175797
Description: Emotion recognition through speech is a technique employed in various scenarios of Human–Computer Interaction (HCI). Existing approaches have achieved significant results; however, limitations persist, most notably the quantity and diversity of data required when deep learning techniques are used. The lack of a standard for feature selection leads to continuous development and experimentation. Choosing and designing an appropriate network architecture constitutes another challenge. This study addresses the challenge of recognizing emotions in the human voice using deep learning techniques, proposing a comprehensive approach that develops preprocessing and feature-selection stages and constructs a dataset, EmoDSc, by combining several available databases. The synergy between spectral features and spectrogram images is investigated. Independently, the weighted accuracy obtained using only spectral features was 89%, while using only spectrogram images it reached 90%. These results, although surpassing previous research, highlight the strengths and limitations of each representation when used in isolation. Based on this exploration, a neural network architecture composed of a CNN1D, a CNN2D, and an MLP that fuses spectral features and spectrogram images is proposed. The model, supported by the unified dataset EmoDSc, demonstrates a remarkable accuracy of 96%.
Database: Directory of Open Access Journals
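
Below is a minimal sketch of the dual-branch fusion architecture described in the abstract: a 1D CNN over spectral feature vectors, a 2D CNN over spectrogram images, and an MLP that fuses both embeddings into emotion logits. It assumes PyTorch; the input sizes (a 180-dimensional spectral vector, 128x128 single-channel spectrograms, 8 emotion classes) and the layer widths are illustrative assumptions, not the configuration reported in the paper.

# Sketch of a combined CNN1D + CNN2D + MLP fusion model for speech emotion
# recognition. All sizes below are illustrative assumptions, not the authors'
# exact configuration.
import torch
import torch.nn as nn


class CombinedEmotionCNN(nn.Module):
    def __init__(self, n_spectral=180, n_classes=8):
        super().__init__()
        # Branch 1: 1D CNN over the spectral feature vector.
        self.cnn1d = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),          # -> (batch, 64)
        )
        # Branch 2: 2D CNN over the spectrogram image.
        self.cnn2d = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),          # -> (batch, 32)
        )
        # Fusion MLP: concatenated branch embeddings -> emotion logits.
        self.mlp = nn.Sequential(
            nn.Linear(64 + 32, 128), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(128, n_classes),
        )

    def forward(self, spectral, spectrogram):
        # spectral: (batch, n_spectral); spectrogram: (batch, 1, H, W)
        z1 = self.cnn1d(spectral.unsqueeze(1))      # add channel dim for Conv1d
        z2 = self.cnn2d(spectrogram)
        return self.mlp(torch.cat([z1, z2], dim=1))


if __name__ == "__main__":
    model = CombinedEmotionCNN()
    feats = torch.randn(4, 180)          # batch of spectral feature vectors
    specs = torch.randn(4, 1, 128, 128)  # batch of spectrogram images
    print(model(feats, specs).shape)     # torch.Size([4, 8])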