Affective Voice Interaction and Artificial Intelligence: A Research Study on the Acoustic Features of Gender and the Emotional States of the PAD Model
Authors: | Sheng-Feng Duan, Xi Lyu, Kuo-Liang Huang |
Language: | English |
Year of publication: | 2021 |
Subjects: | affective computing; voice-user interface (VUI); emotion analysis; acoustic features; PAD model; deep learning; artificial intelligence; empirical research; stratified sampling |
Source: | Frontiers in Psychology, Vol 12 (2021) |
ISSN: | 1664-1078 |
DOI: | 10.3389/fpsyg.2021.664925 |
Description: | New artificial intelligence products are gradually shifting to voice interaction modes, as the demand on intelligent products expands from communication to recognizing users' emotions and giving instantaneous feedback. At present, affective acoustic models are constructed through deep learning and abstracted into a mathematical model, allowing computers to learn from data and equipping them with predictive ability. Although this method can produce accurate predictions, it lacks explanatory capability; an empirical study of the connection between acoustic features and psychology is urgently needed as the theoretical basis for adjusting model parameters. Accordingly, this study explores how seven major acoustic features and their physical characteristics differ, during voice interaction, with the recognition and expression of gender and the emotional states of the pleasure-arousal-dominance (PAD) model. In this study, 31 females and 31 males aged between 21 and 60 were recruited by stratified random sampling and recorded speaking in different emotional states. Parameter values of the acoustic features were then extracted with the Praat voice software and analyzed with a two-way, mixed-design ANOVA in the SPSS software. Results show that the seven major acoustic features vary with gender and with the emotional states of the PAD model, and that the difference values and their rankings also vary. These conclusions lay a theoretical foundation for AI emotional voice interaction and address deep learning's current dilemma in emotion recognition and in parameter optimization of emotion-synthesis models, which stems from its lack of explanatory power. |
Database: | OpenAIRE |
External link: |
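The analysis step described in the abstract (acoustic-feature values compared across gender and emotional state with a two-way ANOVA) can be sketched in plain Python. This is a minimal illustration only: all numbers below are invented, the feature shown (mean F0 in Hz) and the two emotion levels are hypothetical placeholders, and the study's actual mixed design (a within-subject emotion factor, run in SPSS) is simplified here to a balanced between-subjects layout so the sum-of-squares partition is easy to follow.

```python
# Sketch of a balanced two-way ANOVA (gender x emotional state) on an
# acoustic feature. Data are invented; only the arithmetic is illustrated.
from statistics import mean

# cells[gender][emotion] -> list of per-speaker mean-F0 values (Hz), made up
cells = {
    "female": {"pleasure": [220, 230, 225], "arousal": [240, 250, 245]},
    "male":   {"pleasure": [120, 125, 130], "arousal": [140, 150, 145]},
}

def two_way_anova(cells):
    genders = list(cells)
    emotions = list(next(iter(cells.values())))
    n = len(cells[genders[0]][emotions[0]])        # per-cell sample size
    all_vals = [v for g in genders for e in emotions for v in cells[g][e]]
    grand = mean(all_vals)

    # Main effect of gender: deviations of each gender's mean from the grand mean
    ss_gender = n * len(emotions) * sum(
        (mean([v for e in emotions for v in cells[g][e]]) - grand) ** 2
        for g in genders)
    # Main effect of emotion: deviations of each emotion's mean from the grand mean
    ss_emotion = n * len(genders) * sum(
        (mean([v for g in genders for v in cells[g][e]]) - grand) ** 2
        for e in emotions)
    # Between-cells variation; what main effects leave over is the interaction
    ss_cells = n * sum(
        (mean(cells[g][e]) - grand) ** 2 for g in genders for e in emotions)
    ss_interaction = ss_cells - ss_gender - ss_emotion
    # Error: within-cell variation around each cell mean
    ss_error = sum(
        (v - mean(cells[g][e])) ** 2
        for g in genders for e in emotions for v in cells[g][e])
    ss_total = sum((v - grand) ** 2 for v in all_vals)
    return {"SS_gender": ss_gender, "SS_emotion": ss_emotion,
            "SS_interaction": ss_interaction, "SS_error": ss_error,
            "SS_total": ss_total}

result = two_way_anova(cells)
# The partition is additive: SS_total = SS_gender + SS_emotion
#                                        + SS_interaction + SS_error
```

With these invented values the gender effect dwarfs the emotion effect, mirroring the general expectation that F0 separates male and female voices far more strongly than it separates nearby emotional states; the real study quantifies such differences for seven acoustic features.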