Multi-scenario deep learning for multi-speaker source separation

Author: Jeroen Zegers, Hugo Van hamme (KU Leuven)
Language: English
Year of publication: 2018
Source: ICASSP
Description: © 2018 IEEE. Research in deep learning for multi-speaker source separation has received a boost in recent years. However, most studies are restricted to mixtures of a specific number of speakers, called a specific scenario. While some works have included experiments for different scenarios, research on combining data from different scenarios or creating a single model for multiple scenarios has been very rare. In this work it is shown that data from one scenario is relevant for solving another scenario. Furthermore, it is concluded that a single model trained on different scenarios is capable of matching the performance of scenario-specific models. Zegers J., Van hamme H., ''Multi-scenario deep learning for multi-speaker source separation'', 43rd IEEE international conference on acoustics, speech, and signal processing - ICASSP 2018, pp. 5379-5383, April 15-20, 2018, Calgary, Alberta, Canada.
Database: OpenAIRE
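The abstract describes training one separator on mixtures with differing speaker counts. A minimal sketch of one plausible way to build such a multi-scenario training set is shown below, assuming (this is an illustration, not the paper's actual method) that targets are padded with silent sources up to the maximum speaker count so the model always predicts a fixed number of output streams:

```python
# Hypothetical sketch (not taken from the paper): combine 2- and
# 3-speaker training examples into one multi-scenario dataset by
# padding each target list with silent sources up to MAX_SPEAKERS,
# so a single model can be trained on both scenarios at once.

MAX_SPEAKERS = 3

def pad_targets(sources, max_speakers=MAX_SPEAKERS):
    """Pad a list of per-speaker signals with all-zero (silent) signals."""
    n_samples = len(sources[0])
    silence = [0] * n_samples
    return sources + [silence] * (max_speakers - len(sources))

def mix(sources):
    """Sum the per-speaker signals into a single mixture signal."""
    return [sum(samples) for samples in zip(*sources)]

# Toy signals standing in for speech: one 2-speaker and one
# 3-speaker example, each three samples long.
two_spk = [[1, 2, 3], [4, 5, 6]]
three_spk = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

dataset = [(mix(s), pad_targets(s)) for s in (two_spk, three_spk)]
mixture, targets = dataset[0]
print(mixture)       # [5, 7, 9]
print(len(targets))  # 3 -- third stream is silence in the 2-speaker case
```

Every example now has the same number of target streams, so the 2-speaker and 3-speaker scenarios can be shuffled into one training set for a single fixed-output model.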