Speech emotion recognition using affective saliency
Author: | Polychronis Koutsakis, Alexandros Potamianos, Arodami Chorianopoulou |
Language: | English |
Subject: |
Fusion over time; Spoken dialogue systems; Computer science; Emotion classification; Speech recognition; Word error rate; Anger; Affective saliency; Affect (psychology); Emotion recognition; Pattern recognition; Artificial intelligence; Classifier (UML) |
Source: | INTERSPEECH |
Description: | We investigate an affective saliency approach to speech emotion recognition for spoken dialogue utterances that estimates the amount of emotional information over time. The proposed saliency approach uses a regression model that combines features extracted from the acoustic signal with the posteriors of a segment-level classifier to obtain frame- or segment-level ratings. The affective saliency model is trained using a minimum classification error (MCE) criterion that learns the weights by optimizing an objective loss function related to the classification error rate of the emotion recognition system. Affective saliency scores are then used to weight the contribution of frame-level posteriors and/or features to the speech emotion classification decision. The algorithm is evaluated on the task of anger detection on four call-center datasets in two languages, Greek and English, with good results. Presented at: 17th Annual Conference of the International Speech Communication Association. (An illustrative sketch of the saliency-weighted fusion idea appears after the record below.) |
Database: | OpenAIRE |
External link: |
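
The description above outlines two steps: a regression model scores each frame or segment's affective saliency from acoustic features and classifier posteriors, and those scores then weight the segment-level posteriors in the utterance-level emotion decision. The snippet below is a minimal NumPy sketch of that general idea, not the paper's implementation: the linear saliency regressor, the softmax normalization, and all names (`saliency_weights`, `utterance_posterior`, `w_feat`, `w_post`) are assumptions, and the MCE training of the regression weights is not shown.

```python
import numpy as np

def saliency_weights(features, posteriors, w_feat, w_post, bias=0.0):
    """Score each segment's affective saliency with a linear model over its
    acoustic features and classifier posteriors, then normalize the scores
    across the utterance (softmax is an assumption, not the paper's choice)."""
    raw = features @ w_feat + posteriors @ w_post + bias  # one score per segment
    exp = np.exp(raw - raw.max())                         # numerically stable softmax
    return exp / exp.sum()

def utterance_posterior(posteriors, weights):
    """Fuse segment-level class posteriors into an utterance-level posterior
    by weighting each segment with its saliency score."""
    return weights @ posteriors  # saliency-weighted average over segments

# Toy example: 4 segments, 3 acoustic features each, 2 classes (e.g. anger vs. neutral).
rng = np.random.default_rng(0)
features = rng.normal(size=(4, 3))
posteriors = np.array([[0.9, 0.1], [0.4, 0.6], [0.7, 0.3], [0.2, 0.8]])

w = saliency_weights(features, posteriors,
                     w_feat=rng.normal(size=3),
                     w_post=rng.normal(size=2))
print("saliency weights:", w)
print("utterance posterior:", utterance_posterior(posteriors, w))
```

In the paper the weights of the saliency model are learned with an MCE criterion tied to the emotion recognition error rate; in this sketch they are simply random placeholders to show how the weighted fusion would be applied once such weights exist.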