Attention is All You Need in Speech Separation
Authors: Jianyuan Zhong, Mirko Bronzi, Cem Subakan, Mirco Ravanelli, Samuele Cornell
Year: 2020
Subjects: Signal Processing (eess.SP); Sound (cs.SD); Machine Learning (cs.LG); Audio and Speech Processing (eess.AS); FOS: Computer and information sciences; FOS: Electrical engineering, electronic engineering, information engineering; Artificial neural network; Computer science; Computation; Speech processing; Upsampling; Recurrent neural network; Computer engineering; Source separation; Representation (mathematics); Transformer (machine learning model)
Source: ICASSP
DOI: 10.48550/arxiv.2010.13154
Description: Recurrent Neural Networks (RNNs) have long been the dominant architecture in sequence-to-sequence learning. RNNs, however, are inherently sequential models that do not allow parallelization of their computations. Transformers are emerging as a natural alternative to standard RNNs, replacing recurrent computations with a multi-head attention mechanism. In this paper, we propose the SepFormer, a novel RNN-free Transformer-based neural network for speech separation. The SepFormer learns short- and long-term dependencies with a multi-scale approach that employs Transformers (sketched below). The proposed model achieves state-of-the-art (SOTA) performance on the standard WSJ0-2mix and WSJ0-3mix datasets, reaching an SI-SNRi of 22.3 dB on WSJ0-2mix and 19.5 dB on WSJ0-3mix. The SepFormer inherits the parallelization advantages of Transformers and achieves competitive performance even when downsampling the encoded representation by a factor of 8. It is thus significantly faster and less memory-demanding than the latest speech separation systems of comparable performance. Comment: Accepted to ICASSP 2021
Database: OpenAIRE
External link:
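The multi-scale, dual-path Transformer processing mentioned in the description can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation (a reference implementation ships with the SpeechBrain toolkit): the model dimension, head count, layer counts, and chunk size below are illustrative assumptions, chunks are non-overlapping for simplicity (the paper uses overlapping chunks), and the convolutional encoder/decoder, masking network, positional encodings, and block repetition of the full SepFormer are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualPathTransformerBlock(nn.Module):
    """Minimal dual-path block: an intra-chunk Transformer models
    short-term dependencies, an inter-chunk Transformer models
    long-term dependencies."""

    def __init__(self, d_model=256, n_heads=8, n_layers=2, chunk_size=250):
        super().__init__()

        def encoder():
            layer = nn.TransformerEncoderLayer(
                d_model, n_heads, dim_feedforward=1024, batch_first=True)
            return nn.TransformerEncoder(layer, num_layers=n_layers)

        self.intra = encoder()   # attention within each chunk
        self.inter = encoder()   # attention across chunks
        self.chunk_size = chunk_size

    def forward(self, x):
        # x: (batch, time, d_model). Pad time to a multiple of chunk_size.
        b, t, d = x.shape
        c = self.chunk_size
        x = F.pad(x, (0, 0, 0, (-t) % c))
        n = x.shape[1] // c                      # number of chunks

        # Intra-chunk pass: fold chunks into the batch dimension so
        # attention runs over the (short) chunk axis.
        x = x.view(b, n, c, d).reshape(b * n, c, d)
        x = self.intra(x).view(b, n, c, d)

        # Inter-chunk pass: attend over chunks at each within-chunk
        # position, capturing long-range structure cheaply.
        x = x.transpose(1, 2).reshape(b * c, n, d)
        x = self.inter(x).view(b, c, n, d).transpose(1, 2)

        return x.reshape(b, n * c, d)[:, :t]     # drop the padding
```

A quick shape check on a batch of hypothetical encoded features:

```python
x = torch.randn(2, 1000, 256)        # (batch, time, features)
y = DualPathTransformerBlock()(x)
print(y.shape)                       # torch.Size([2, 1000, 256])
```

The SI-SNRi figures quoted above measure the improvement in scale-invariant signal-to-noise ratio of the separated signal over the unprocessed mixture. A minimal sketch of that metric, assuming time-aligned 1-D signals in the last dimension:

```python
def si_snr(est, ref, eps=1e-8):
    # Zero-mean both signals, project the estimate onto the reference,
    # and compare the scaled target against the residual error.
    est = est - est.mean(-1, keepdim=True)
    ref = ref - ref.mean(-1, keepdim=True)
    target = (est * ref).sum(-1, keepdim=True) * ref \
        / (ref.pow(2).sum(-1, keepdim=True) + eps)
    noise = est - target
    return 10 * torch.log10(target.pow(2).sum(-1) / (noise.pow(2).sum(-1) + eps))

def si_snri(est, mix, ref):
    # Improvement over treating the raw mixture as the estimate.
    return si_snr(est, ref) - si_snr(mix, ref)
```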