TF-Locoformer: Transformer with Local Modeling by Convolution for Speech Separation and Enhancement

Author: Saijo, Kohei, Wichern, Gordon, Germain, François G., Pan, Zexu, Le Roux, Jonathan
Year of publication: 2024
Subject:
Document type: Working Paper
Description: Time-frequency (TF) domain dual-path models achieve high-fidelity speech separation. While some previous state-of-the-art (SoTA) models rely on RNNs, this reliance means that they lack the parallelizability, scalability, and versatility of Transformer blocks. Given the wide-ranging success of pure Transformer-based architectures in other fields, in this work we focus on removing the RNN from TF-domain dual-path models while maintaining SoTA performance. This work presents TF-Locoformer, a Transformer-based model with LOcal-modeling by COnvolution. The model uses feed-forward networks (FFNs) with convolution layers, instead of linear layers, to capture local information, letting the self-attention focus on capturing global patterns. We place two such FFNs before and after self-attention to enhance the local-modeling capability. We also introduce a novel normalization for TF-domain dual-path models. Experiments on separation and enhancement datasets show that the proposed model meets or exceeds SoTA performance in multiple benchmarks with an RNN-free architecture.
Comment: Accepted to IWAENC 2024
Database: arXiv
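
The description above outlines the block structure: a convolution-based feed-forward module placed both before and after self-attention, so that the convolutions handle local modeling and attention handles global modeling. Below is a minimal PyTorch sketch of such a block for one path (e.g., the time axis) of a dual-path model. The module names (ConvFFN, LocoformerBlock), the use of LayerNorm in place of the paper's novel normalization, and all kernel sizes and dimensions are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a Transformer block with convolutional FFNs before and after
# self-attention, following the structure described in the abstract.
# Hyperparameters and module names are assumptions for illustration only.
import torch
import torch.nn as nn


class ConvFFN(nn.Module):
    """Feed-forward module using 1-D convolutions instead of linear layers,
    so each frame mixes information with its local neighborhood."""

    def __init__(self, dim: int, hidden: int, kernel_size: int = 3):
        super().__init__()
        self.norm = nn.LayerNorm(dim)  # placeholder for the paper's normalization
        self.conv_in = nn.Conv1d(dim, hidden, kernel_size, padding=kernel_size // 2)
        self.act = nn.SiLU()
        self.conv_out = nn.Conv1d(hidden, dim, kernel_size, padding=kernel_size // 2)

    def forward(self, x):  # x: (batch, seq_len, dim)
        y = self.norm(x).transpose(1, 2)          # (batch, dim, seq_len) for Conv1d
        y = self.conv_out(self.act(self.conv_in(y)))
        return x + y.transpose(1, 2)              # residual connection


class LocoformerBlock(nn.Module):
    """Conv-FFN -> multi-head self-attention -> Conv-FFN, each with a residual."""

    def __init__(self, dim: int = 128, heads: int = 4, hidden: int = 384):
        super().__init__()
        self.ffn_pre = ConvFFN(dim, hidden)
        self.attn_norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn_post = ConvFFN(dim, hidden)

    def forward(self, x):  # x: (batch, seq_len, dim)
        x = self.ffn_pre(x)                       # local modeling before attention
        y = self.attn_norm(x)
        x = x + self.attn(y, y, y, need_weights=False)[0]  # global modeling
        return self.ffn_post(x)                   # local modeling after attention


if __name__ == "__main__":
    block = LocoformerBlock()
    frames = torch.randn(2, 100, 128)             # e.g., 100 frames along one path
    print(block(frames).shape)                    # torch.Size([2, 100, 128])
```

In a full TF-domain dual-path model, such blocks would be applied alternately along the time and frequency axes of the spectrogram; the sketch covers only a single block on one axis.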