Speech Enhancement via Deep Spectrum Image Translation Network
Authors: Ata Jodeiri, Hamidreza Baradaran Kashani, Iman Sarraf Rezaei, Mohammad Mohsen Goodarzi
Year of publication: 2019
Subjects: Speech enhancement; Hearing aid; Cochlear implant; Speech recognition; Background noise; Intelligibility (communication); Image translation; Encoder; PESQ; Computer Science - Sound (cs.SD); Electrical Engineering and Systems Science - Audio and Speech Processing (eess.AS)
DOI: 10.48550/arxiv.1911.01902
Description: The quality and intelligibility of speech signals are degraded by additive background noise, a critical problem for hearing aid and cochlear implant users. Motivated by this problem, we propose a novel speech enhancement approach using a deep spectrum image translation network. To this end, we suggest a new architecture, called VGG19-UNet, in which a deep fully convolutional network, VGG19, is embedded as the encoder of an image-to-image translation network, U-Net. Moreover, we propose a perceptually modified version of the spectrum image, represented in the Mel frequency and power-law amplitude domains, which provide good approximations of the human auditory perception model. Through experiments on a real challenge in speech enhancement, namely unseen noise environments, we show that the proposed approach outperforms other enhancement methods in both quality and intelligibility, measured by PESQ and ESTOI, respectively. Comment: Accepted at ICBME 2019
Database: OpenAIRE
External link:
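The perceptually modified spectrum image described in the abstract can be sketched as follows: compute an STFT magnitude spectrogram, map it onto the Mel frequency scale with a triangular filterbank, and apply power-law amplitude compression. This is a minimal NumPy illustration of that general pipeline, not the authors' implementation; the frame sizes and the compression exponent `alpha=0.3` are assumptions chosen for illustration (the paper does not specify them here).

```python
import numpy as np

def hz_to_mel(f):
    # standard HTK-style Mel scale conversion
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels, n_fft, sr):
    # triangular filters with centers spaced evenly on the Mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def mel_powerlaw_spectrum(x, sr=16000, n_fft=512, hop=128, n_mels=64, alpha=0.3):
    # frame the signal, window it, and take the STFT magnitude
    win = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[t * hop:t * hop + n_fft] * win for t in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, axis=1))          # (n_frames, n_fft//2 + 1)
    mel = mag @ mel_filterbank(n_mels, n_fft, sr).T    # project onto Mel bands
    # power-law amplitude compression (exponent is an illustrative assumption)
    return mel.T ** alpha                              # (n_mels, n_frames) "image"

# one second of a 440 Hz tone as a toy input signal
x = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
S = mel_powerlaw_spectrum(x)
```

The resulting nonnegative 2-D array `S` is the kind of spectrum image that an image-to-image translation network such as the paper's VGG19-UNet would take as input and output.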