Deep Recurrent NMF for Speech Separation by Unfolding Iterative Thresholding
Author: | Scott Wisdom, Thomas Powers, Les Atlas, James W. Pitton |
Year of publication: | 2017 |
Subject: | Sound (cs.SD); Machine Learning (cs.LG); Machine Learning (stat.ML); speech separation; non-negative matrix factorization; recurrent neural network; network architecture; optimization; initialization; convergence; spectrogram; interpretability; pattern recognition |
Source: | WASPAA |
DOI: | 10.48550/arxiv.1709.07124 |
Description: | In this paper, we propose a novel recurrent neural network architecture for speech separation. This architecture is constructed by unfolding the iterations of a sequential iterative soft-thresholding algorithm (ISTA) that solves the optimization problem for sparse nonnegative matrix factorization (NMF) of spectrograms. We name this network architecture deep recurrent NMF (DR-NMF). The proposed DR-NMF network has three distinct advantages. First, DR-NMF provides better interpretability than other deep architectures, since the weights correspond to NMF model parameters, even after training. This interpretability also provides principled initializations that enable faster training and convergence to better solutions compared to conventional random initialization. Second, like many deep networks, DR-NMF is an order of magnitude faster at test time than NMF, since computation of the network output only requires evaluating a few layers at each time step. Third, when a limited amount of training data is available, DR-NMF exhibits stronger generalization and separation performance compared to sparse NMF and state-of-the-art long short-term memory (LSTM) networks. When a large amount of training data is available, DR-NMF achieves lower yet competitive separation performance compared to LSTM networks. Comment: To be presented at WASPAA 2017 |
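The core idea described above — unfolding ISTA iterations for sparse NMF into network layers — can be sketched as follows. This is a hypothetical, simplified illustration, not the authors' implementation: the dictionary `W`, the sparsity weight `lam`, and the layer count are placeholders, and the real DR-NMF network would untie and train the per-layer parameters.

```python
import numpy as np

def relu(z):
    # Nonnegative projection; combined with the lam shift below this acts
    # as a one-sided soft-threshold, keeping the activations nonnegative.
    return np.maximum(z, 0.0)

def unfolded_ista_nmf(x, W, n_layers=5, lam=0.1):
    """Sketch of an unfolded-ISTA inference pass for sparse NMF.

    Approximately solves, for one spectrogram frame x:
        min_h 0.5 * ||x - W h||^2 + lam * ||h||_1,   h >= 0,
    by running n_layers gradient + soft-threshold steps. Each loop
    iteration plays the role of one layer of the unfolded network.
    """
    # Step size 1/L, where L = ||W||_2^2 is the Lipschitz constant
    # of the gradient of the data-fit term.
    eta = 1.0 / np.linalg.norm(W, 2) ** 2
    h = np.zeros(W.shape[1])
    for _ in range(n_layers):
        grad = W.T @ (W @ h - x)           # gradient of 0.5*||x - W h||^2
        h = relu(h - eta * (grad + lam))   # shrink by lam, clip at zero
    return h
```

At test time, only these few layers are evaluated per time frame, which is the source of the speedup over running full NMF inference to convergence.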
Database: | OpenAIRE |
External link: |