Author: |
Manh-Hung Ha, Duc-Chinh Nguyen, Long Quang Chan, Oscal T.C. Chen |
Language: |
English |
Year of publication: |
2024 |
Subject: |
|
Source: |
EAI Endorsed Transactions on Industrial Networks and Intelligent Systems, Vol 11, Iss 4 (2024) |
Document type: |
article |
ISSN: |
2410-0218 |
DOI: |
10.4108/eetinis.v11i4.4734 |
Description: |
It is difficult to determine whether a person is depressed because the symptoms of depression are often not apparent. However, the voice can be one way to recognize signs of depression. Understanding human emotions in natural language plays a crucial role in intelligent and sophisticated applications. This study proposes a deep learning architecture that recognizes a speaker's emotions from audio signals, which can help diagnose patients who are depressed or prone to depression so that treatment and prevention can begin as early as possible. Specifically, Mel-frequency cepstral coefficients (MFCC) and the Short-Time Fourier Transform (STFT) are adopted to extract features from the audio signal. The proposed model combines multiple streams of DNNs, including a CNN-LSTM based on an attention mechanism, which are discussed in this research. Leveraging a pretrained model, the experiments yield an accuracy of 93.2% on the EmoDB dataset. Further optimization remains a potential avenue for future development. It is hoped that this research will contribute to potential applications in medical treatment and personal well-being. |
Database: |
Directory of Open Access Journals |
External link: |
|
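The STFT feature-extraction step mentioned in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' code; the frame length (25 ms), hop size (10 ms), and FFT size are assumed typical values for 16 kHz speech, and a synthetic tone stands in for a recording:

```python
import numpy as np

def stft_magnitude(signal, frame_len=400, hop=160, n_fft=512):
    """Magnitude STFT: frame the signal, apply a Hann window, take the FFT."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # One-sided spectrum per frame: shape (n_frames, n_fft // 2 + 1)
    return np.abs(np.fft.rfft(frames, n=n_fft, axis=1))

# Synthetic 1-second 440 Hz tone at 16 kHz in place of a speech sample
sr = 16000
t = np.arange(sr) / sr
sig = np.sin(2 * np.pi * 440 * t)

spec = stft_magnitude(sig)
print(spec.shape)  # (98, 257)
```

A spectrogram of this shape (time frames by frequency bins) is the kind of 2-D input a CNN-LSTM stream can consume; MFCC features would add a Mel filterbank, log, and DCT on top of this magnitude spectrum.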