Hierarchical transformer speech depression detection model research based on Dynamic window and Attention merge

Author: Xiaoping Yue, Chunna Zhang, Zhijian Wang, Yang Yu, Shengqiang Cong, Yuming Shen, Jinchi Zhao
Language: English
Year of publication: 2024
Source: PeerJ Computer Science, Vol 10, p e2348 (2024)
Document type: article
ISSN: 2376-5992
DOI: 10.7717/peerj-cs.2348
Description: Speech-based depression detection is widely applied because speech is easy to acquire and rich in emotional content. However, effectively segmenting and integrating depressed speech segments remains challenging, and repeated merging can blur the original information. These problems diminish the effectiveness of existing models. This article proposes a Hierarchical Transformer model for speech depression detection based on dynamic window and attention merge, abbreviated as DWAM-Former. DWAM-Former uses a Learnable Speech Split module (LSSM) to effectively separate the phonemes and words within an entire speech segment. Moreover, an Adaptive Attention Merge module (AAM) is introduced to generate representative feature representations for each phoneme and word in the sentence. DWAM-Former also associates the original feature information with the merged features through a Variable-Length Residual module (VL-RM), reducing the feature loss caused by repeated merging. DWAM-Former achieves highly competitive results on the depression detection dataset DAIC-WOZ, reaching an MF1 score of 0.788 in the experiments, a 7.5% improvement over previous research.
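The abstract does not give implementation details for the Adaptive Attention Merge module. As a rough, hedged illustration of the general idea of merging a variable-length span of frame-level features into a single representation via learned attention weights, one might sketch it as follows (the function name, scoring-vector parameterization, and shapes here are assumptions, not the paper's actual AAM design):

```python
import numpy as np

def attention_merge(frames, w):
    """Merge a variable-length span of frame features into one vector
    using attention-weighted pooling.

    Illustrative sketch only; the paper's AAM module is not specified
    in the abstract. `w` stands in for a learnable scoring vector.
    """
    # frames: (T, d) feature matrix for one phoneme/word span
    # w:      (d,) scoring vector (hypothetical parameterization)
    scores = frames @ w                    # (T,) relevance score per frame
    alpha = np.exp(scores - scores.max())  # numerically stable softmax
    alpha /= alpha.sum()                   # (T,) attention weights, sum to 1
    return alpha @ frames                  # (d,) merged representation
```

With a zero scoring vector the attention weights are uniform, so the merge reduces to mean pooling over the span; a trained scoring vector would instead emphasize the most informative frames.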
Database: Directory of Open Access Journals