Towards a Perceptual Loss: Using a Neural Network Codec Approximation as a Loss for Generative Audio Models
| Author: | Ishwarya Ananthabhotla, Joseph A. Paradiso, Sebastian Ewert |
| --- | --- |
| Language: | English |
| Year of publication: | 2019 |
| Subject: | Artificial neural network; Computer science; Speech recognition; Speech enhancement; Generative model; Source separation; Codec; Process (engineering); Function (engineering) |
| Source: | MIT web domain; ACM Multimedia |
| Description: | © 2019 Association for Computing Machinery. Generative audio models based on neural networks have led to considerable improvements across fields including speech enhancement, source separation, and text-to-speech synthesis. These systems are typically trained in a supervised fashion using simple element-wise ℓ1 or ℓ2 losses. However, because they do not capture properties of the human auditory system, such losses encourage modelling perceptually meaningless aspects of the output, wasting capacity and limiting performance. Additionally, while adversarial models have been employed to encourage outputs that are statistically indistinguishable from ground truth and have resulted in improvements in this regard, such losses do not need to explicitly model perception as their task; furthermore, training adversarial networks remains an unstable and slow process. In this work, we investigate an idea fundamentally rooted in psychoacoustics. We train a neural network to emulate an MP3 codec as a differentiable function. Feeding the output of a generative model through this MP3 function, we remove signal components that are perceptually irrelevant before computing a loss. To further stabilize gradient propagation, we employ intermediate layer outputs to define our loss, as found useful in image domain methods. Our experiments using an autoencoding task show an improvement over standard losses in listening tests, indicating the potential of psychoacoustically motivated models for audio generation. |
| Database: | OpenAIRE |
| External link: | |
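
To make the idea in the description concrete, below is a minimal, hypothetical PyTorch sketch of such a loss: a small network standing in for a pretrained MP3-codec approximation is frozen, both the generated and the reference signal are passed through it, and the loss is the ℓ1 distance between its intermediate activations. The architecture, layer sizes, and names (`CodecApproximation`, `CodecPerceptualLoss`) are placeholders for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a codec-approximation perceptual loss (PyTorch).
# The real codec network would be pretrained to emulate an MP3 encode/decode
# cycle; here a placeholder stack of 1-D convolutions stands in for it.
import torch
import torch.nn as nn


class CodecApproximation(nn.Module):
    """Placeholder for a network pretrained to emulate an MP3 codec
    (waveform in, waveform out)."""

    def __init__(self, channels: int = 32, kernel_size: int = 9):
        super().__init__()
        pad = kernel_size // 2
        self.layers = nn.ModuleList([
            nn.Conv1d(1, channels, kernel_size, padding=pad),
            nn.Conv1d(channels, channels, kernel_size, padding=pad),
            nn.Conv1d(channels, 1, kernel_size, padding=pad),
        ])

    def forward(self, x: torch.Tensor):
        # Return every layer's activations so the loss can be defined on
        # intermediate representations, not only on the final codec output.
        feats = []
        h = x
        for layer in self.layers:
            h = torch.tanh(layer(h))
            feats.append(h)
        return feats


class CodecPerceptualLoss(nn.Module):
    """Sum of L1 distances between intermediate activations of the frozen
    codec-approximation network for generated vs. reference audio."""

    def __init__(self, codec: CodecApproximation):
        super().__init__()
        self.codec = codec.eval()
        for p in self.codec.parameters():
            p.requires_grad_(False)  # the codec approximation stays fixed

    def forward(self, generated: torch.Tensor, reference: torch.Tensor) -> torch.Tensor:
        gen_feats = self.codec(generated)
        ref_feats = self.codec(reference)
        return sum(torch.mean(torch.abs(g - r))
                   for g, r in zip(gen_feats, ref_feats))


if __name__ == "__main__":
    codec = CodecApproximation()                # would be pretrained to mimic MP3
    loss_fn = CodecPerceptualLoss(codec)
    y_hat = torch.randn(4, 1, 16000, requires_grad=True)  # generator output
    y = torch.randn(4, 1, 16000)                           # ground-truth audio
    loss = loss_fn(y_hat, y)
    loss.backward()                             # gradients flow through the frozen codec
    print(float(loss))
```

In practice, the codec-approximation network would first be trained on pairs of original and MP3-encoded/decoded audio and then frozen, so that only the generative model's parameters are updated by the loss above.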