Dataset-Distillation Generative Model for Speech Emotion Recognition

Author: Ritter-Gutierrez, Fabian; Huang, Kuan-Po; Wong, Jeremy H. M.; Ng, Dianwen; Lee, Hung-yi; Chen, Nancy F.; Chng, Eng Siong
Publication year: 2024
Subject:
Document type: Working Paper
Description: Deep learning models for speech rely on large datasets, which presents computational challenges; yet performance hinges on training data size. Dataset Distillation (DD) aims to learn a smaller dataset that causes little performance degradation when used for training. DD has been investigated in computer vision but not yet in speech. This paper presents the first application of DD to speech, targeting Speech Emotion Recognition on IEMOCAP. We employ Generative Adversarial Networks (GANs) not to mimic real data but to distil the key discriminative information in IEMOCAP that is useful for downstream training. The GAN then replaces the original dataset and can sample synthetic datasets of custom sizes. It performs comparably when following the original class imbalance but improves performance by 0.3% absolute UAR with balanced classes. It also reduces dataset storage and accelerates downstream training by 95% in both cases, and it reduces speaker information, which could benefit privacy applications.
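The description only sketches the approach at a high level, so the following is a minimal, hypothetical PyTorch illustration of the sampling step it mentions: a conditional generator stands in for the trained GAN (the architecture, latent size, feature dimension, and the helper name sample_synthetic_dataset are all assumptions made here, not details from the paper), and a synthetic dataset of any requested size is drawn either with balanced classes or with arbitrary class labels.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Hypothetical class-conditional generator; in the paper's setup,
    a trained GAN like this replaces the original IEMOCAP data."""

    def __init__(self, latent_dim=128, feat_dim=768, num_classes=4):
        super().__init__()
        self.latent_dim = latent_dim
        self.num_classes = num_classes
        # Placeholder architecture: latent code + one-hot label -> feature.
        self.net = nn.Sequential(
            nn.Linear(latent_dim + num_classes, 512),
            nn.ReLU(),
            nn.Linear(512, feat_dim),
        )

    def forward(self, z, labels):
        onehot = F.one_hot(labels, self.num_classes).float()
        return self.net(torch.cat([z, onehot], dim=-1))

def sample_synthetic_dataset(gen, size, balanced=True):
    """Draw a custom-size synthetic dataset from the generator.
    balanced=True mirrors the class-balanced setting the abstract
    reports as improving UAR over the original class imbalance."""
    if balanced:
        labels = torch.arange(gen.num_classes).repeat(size // gen.num_classes)
    else:
        # Stand-in for sampling labels from the original class priors.
        labels = torch.randint(0, gen.num_classes, (size,))
    z = torch.randn(len(labels), gen.latent_dim)
    with torch.no_grad():
        feats = gen(z, labels)
    return feats, labels

# Usage: replace dataset storage with generator weights, then sample
# whatever training-set size the downstream SER model needs.
gen = Generator()
feats, labels = sample_synthetic_dataset(gen, size=400, balanced=True)
```

Since only the generator's weights need to be stored and any number of examples can be synthesized on demand, this would account for the reported reductions in dataset storage and downstream training time.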
Comment: Accepted at Interspeech 2024
Database: arXiv