Audio-based Granularity-adapted Emotion Classification

Author: Sven Ewan Shepstone, Zheng-Hua Tan, Søren Holdt Jensen
Language: English
Year of publication: 2018
Source: Shepstone, S. E., Tan, Z.-H. & Jensen, S. H. 2018, 'Audio-based Granularity-adapted Emotion Classification', IEEE Transactions on Affective Computing, vol. 9, no. 2, pp. 176-190. https://doi.org/10.1109/TAFFC.2016.2598741
DOI: 10.1109/TAFFC.2016.2598741
Description: This paper introduces a novel framework for combining the strengths of machine-based and human-based emotion classification. People's ability to tell similar emotions apart is known as emotional granularity, which can be high or low, and is measurable. This paper proposes granularity-adapted classification that can be used as a front-end to drive a recommender, based on emotions from speech. In this context, incorrectly predicting people's emotions could lead to poor recommendations, reducing user satisfaction. Instead of identifying a single emotion class, an adapted class is proposed: an aggregate of underlying emotion classes chosen based on granularity. In the recommendation context, the adapted class maps to a larger region in valence-arousal space, from which a list of potentially more similar content items is drawn and recommended to the user. To determine the effectiveness of adapted classes, we measured the emotional granularity of subjects and, for each subject, used their pairwise similarity judgments of emotion to compare the effectiveness of adapted classes versus single emotion classes taken from a baseline system. A customized Euclidean-based similarity metric is used to measure the relative proximity of emotion classes. Results show that granularity-adapted classification can improve the potential similarity by up to 9.6 percent.
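To make the aggregation idea concrete, the following is a minimal Python sketch of how an adapted class might be formed, not the authors' implementation: emotion classes are treated as points in valence-arousal space, and a subject's granularity sets the radius of aggregation. The class names, coordinates, and threshold are illustrative assumptions, and plain Euclidean distance stands in for the paper's customized Euclidean-based similarity metric.

```python
# Illustrative sketch only; centroids and threshold are assumed, not from the paper.
from math import dist

# Hypothetical emotion class centroids in (valence, arousal) space, each in [-1, 1].
EMOTION_CENTROIDS = {
    "happy":   (0.8, 0.6),
    "content": (0.6, -0.3),
    "sad":     (-0.7, -0.5),
    "angry":   (-0.6, 0.7),
    "afraid":  (-0.4, 0.8),
}

def adapted_class(predicted: str, granularity: float) -> set[str]:
    """Aggregate the predicted class with all classes within the granularity radius.

    A low-granularity subject (larger radius) gets a larger adapted class,
    i.e. a bigger region of valence-arousal space to draw recommendations from.
    """
    center = EMOTION_CENTROIDS[predicted]
    return {
        name
        for name, point in EMOTION_CENTROIDS.items()
        if dist(center, point) <= granularity
    }

# Example: a coarse-grained subject's "angry" prediction also covers "afraid",
# while a fine-grained subject keeps the single predicted class.
print(adapted_class("angry", granularity=0.5))  # {'angry', 'afraid'}
print(adapted_class("angry", granularity=0.1))  # {'angry'}
```

In a recommender front-end, the returned set would define the region of valence-arousal space from which candidate content items are drawn, so coarser granularity trades precision for a broader, potentially more satisfying candidate list.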
Database: OpenAIRE