Multi-scenario deep learning for multi-speaker source separation
Authors: | Jeroen Zegers, Hugo Van hamme (KU Leuven) |
Language: | English |
Year of publication: | 2018 |
Subject: |
FOS: Computer and information sciences
FOS: Computer and information sciences; Computer Science - Machine Learning (cs.LG); Statistics - Machine Learning (stat.ML); Machine learning; Deep learning; Artificial intelligence; Source separation; Task analysis; Matching (statistics); PSI_SPEECH |
Source: | ICASSP |
Description: | © 2018 IEEE. Research in deep learning for multi-speaker source separation has received a boost in recent years. However, most studies are restricted to mixtures of a specific number of speakers, referred to as a specific scenario. While some works have included experiments for different scenarios, research on combining data from different scenarios, or on creating a single model for multiple scenarios, has been very rare. In this work it is shown that data from one scenario is relevant for solving another scenario. Furthermore, it is concluded that a single model trained on different scenarios can match the performance of scenario-specific models. Zegers J., Van hamme H., ''Multi-scenario deep learning for multi-speaker source separation'', 43rd IEEE International Conference on Acoustics, Speech, and Signal Processing - ICASSP 2018, pp. 5379-5383, April 15-20, 2018, Calgary, Alberta, Canada. Part of: Proceedings ICASSP 2018, vol. 2018-April, pp. 5379-5383; location: Calgary, Alberta, Canada; date: 15-20 April 2018; status: published |
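The central idea in the abstract is pooling training data from several scenarios (mixtures with different numbers of speakers) so that a single model can serve all of them. A minimal sketch of such pooling, using synthetic random signals in place of real speech and zero-padding the target sources to a shared maximum speaker count (both are illustrative assumptions, not the paper's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mixture(n_speakers, n_samples=1600):
    """Create a toy mixture of n_speakers random 'source' signals.
    Random noise stands in for real speech here (an assumption);
    the paper's experiments use actual multi-speaker mixtures."""
    sources = rng.standard_normal((n_speakers, n_samples))
    return sources.sum(axis=0), sources

def pad_sources(sources, max_speakers):
    """Zero-pad the source stack so every scenario shares one target
    shape -- one simple way a single multi-scenario model could emit
    a fixed number of output channels."""
    n, t = sources.shape
    padded = np.zeros((max_speakers, t))
    padded[:n] = sources
    return padded

# Pool 2-speaker and 3-speaker scenarios into one training set.
max_spk = 3
pool = []
for n_speakers in (2, 3):
    mix, srcs = make_mixture(n_speakers)
    pool.append((mix, pad_sources(srcs, max_spk)))

# Every (mixture, targets) pair now has a uniform target shape,
# so one model can be trained on all scenarios jointly.
print([targets.shape for _, targets in pool])  # → [(3, 1600), (3, 1600)]
```

The padding trick is only one possible design; the paper itself evaluates how such jointly trained models compare against scenario-specific ones.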
Database: | OpenAIRE |
External link: |