Deep Speech Inpainting of Time-Frequency Masks
Author: Pierre Beckmann, Milos Cernak, Mikolaj Kegler
Year of publication: 2020
Subject: FOS: Computer and information sciences; FOS: Electrical engineering, electronic engineering, information engineering; Computer Science - Sound (cs.SD); Computer Science - Computation and Language (cs.CL); Electrical Engineering and Systems Science - Audio and Speech Processing (eess.AS); computer science; speech recognition; inpainting; time-frequency analysis; bandwidth (signal processing); signal; context (language use); feature (computer vision); word (computer architecture); PESQ
Source: INTERSPEECH
DOI: 10.21437/interspeech.2020-1532
Description: Transient loud intrusions, which often occur in noisy environments, can completely overpower the speech signal and lead to an inevitable loss of information. While existing noise-suppression algorithms can yield impressive results, their efficacy remains limited at very low signal-to-noise ratios or when parts of the signal are missing entirely. To address these limitations, we propose an end-to-end framework for speech inpainting: the context-based retrieval of missing or severely distorted parts of the time-frequency representation of speech. The framework is based on a convolutional U-Net trained via deep feature losses, obtained using speechVGG, a deep speech feature extractor pre-trained on an auxiliary word-classification task (see the sketch after this record). Our evaluation demonstrates that the proposed framework can recover large missing or distorted portions of the time-frequency representation of speech, up to 400 ms long and 3.2 kHz in bandwidth. In particular, our approach substantially increased the STOI and PESQ objective metrics of the initially corrupted speech samples. Notably, training the framework with deep feature losses led to the best results compared to conventional approaches. Accepted to INTERSPEECH 2020.
Database: OpenAIRE
External link:
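
The record itself contains no code, but the training objective described above lends itself to a short illustration. Below is a minimal PyTorch sketch of the deep-feature-loss idea: the inpainted and reference log-magnitude spectrograms are both passed through a frozen, pre-trained feature extractor, and the loss is the distance between their intermediate activations. The `FeatureExtractor` here is a hypothetical stand-in for speechVGG; its layer sizes, the choice of L1 distance, and the uniform summing over depths are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Hypothetical stand-in for speechVGG: a few conv blocks whose
    intermediate activations serve as 'deep features'. In the paper the
    extractor is pre-trained on word classification and then frozen."""
    def __init__(self):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
            nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
            nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU()),
        ])

    def forward(self, x):
        feats = []
        for block in self.blocks:
            x = block(x)
            feats.append(x)  # collect the activation at every depth
        return feats

def deep_feature_loss(extractor, inpainted, reference):
    """Sum of L1 distances between feature maps at every depth (an
    illustrative choice; the exact distance/weighting is an assumption)."""
    with torch.no_grad():
        ref_feats = extractor(reference)  # targets need no gradient
    out_feats = extractor(inpainted)      # gradients flow back to the U-Net
    return sum(nn.functional.l1_loss(o, r) for o, r in zip(out_feats, ref_feats))

# Usage: batches of (1-channel, freq-bins, time-frames) log-magnitude
# spectrograms; `inpainted` stands in for the U-Net's output.
extractor = FeatureExtractor().eval()
for p in extractor.parameters():
    p.requires_grad_(False)  # the extractor stays frozen during training
inpainted = torch.randn(4, 1, 128, 100, requires_grad=True)
reference = torch.randn(4, 1, 128, 100)
loss = deep_feature_loss(extractor, inpainted, reference)
loss.backward()  # gradients reach the inpainting network through `inpainted`
```

The design intuition, consistent with the abstract's finding, is that matching activations at several depths of a speech-trained network penalizes perceptually relevant structural errors that a plain pixel-wise spectrogram loss can miss, which is why the deep-feature objective outperformed conventional losses in the authors' evaluation.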