BERTphone: Phonetically-aware Encoder Representations for Utterance-level Speaker and Language Recognition
| Author: | Katrin Kirchhoff, Yuzong Liu, Julian Salazar, Shaoshi Ling |
|---|---|
| Year of publication: | 2020 |
| Subjects: | FOS: Computer and information sciences; Sound (cs.SD); Computer Science - Machine Learning; Computer Science - Computation and Language; Computer science; Speech recognition; Computer Science - Sound; Machine Learning (cs.LG); Audio and Speech Processing (eess.AS); FOS: Electrical engineering, electronic engineering, information engineering; Computation and Language (cs.CL); Encoder; Utterance; Electrical Engineering and Systems Science - Audio and Speech Processing; Language recognition |
| Source: | Odyssey |
| DOI: | 10.21437/odyssey.2020-2 |
| Description: | We introduce BERTphone, a Transformer encoder trained on large speech corpora that outputs phonetically-aware contextual representation vectors that can be used for both speaker and language recognition. This is accomplished by training on two objectives: the first, inspired by adapting BERT to the continuous domain, involves masking spans of input frames and reconstructing the whole sequence for acoustic representation learning; the second, inspired by the success of bottleneck features from ASR, is a sequence-level CTC loss applied to phoneme labels for phonetic representation learning. We pretrain two BERTphone models (one on Fisher and one on TED-LIUM) and use them as feature extractors into x-vector-style DNNs for both tasks. We attain a state-of-the-art $C_{\text{avg}}$ of 6.16 on the challenging LRE07 3sec closed-set language recognition task. On Fisher and VoxCeleb speaker recognition tasks, we see an 18% relative reduction in speaker EER when training on BERTphone vectors instead of MFCCs. In general, BERTphone outperforms previous phonetic pretraining approaches on the same data. We release our code and models at https://github.com/awslabs/speech-representations. (Odyssey 2020 camera-ready; presented Nov. 2020.) |
| Database: | OpenAIRE |
| External link: | |
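
The description above combines two training objectives: masked-span frame reconstruction and a sequence-level CTC loss on phoneme labels. The following is a minimal PyTorch sketch of how such a combined loss could be wired up; it is not the released awslabs implementation, and the module names, dimensions, L1 reconstruction criterion, and the `ctc_weight` mixing factor are illustrative assumptions.

```python
# Illustrative sketch (assumed shapes and hyperparameters, not the paper's code):
# a Transformer encoder trained to (1) reconstruct input frames after span masking
# and (2) predict phoneme sequences via CTC from the same contextual states.
import torch
import torch.nn as nn


class BERTphoneSketch(nn.Module):
    def __init__(self, n_mels=80, d_model=512, n_layers=6, n_phones=43, ctc_weight=0.1):
        super().__init__()
        self.input_proj = nn.Linear(n_mels, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.recon_head = nn.Linear(d_model, n_mels)        # frame reconstruction
        self.phone_head = nn.Linear(d_model, n_phones + 1)  # +1 for the CTC blank
        self.ctc = nn.CTCLoss(blank=n_phones, zero_infinity=True)
        self.ctc_weight = ctc_weight

    def forward(self, frames, span_mask, phones, frame_lens, phone_lens):
        # frames: (B, T, n_mels); span_mask: (B, T) boolean, True on masked spans
        masked = frames.masked_fill(span_mask.unsqueeze(-1), 0.0)  # zero masked spans
        h = self.encoder(self.input_proj(masked))                  # contextual states

        # Objective 1: reconstruct the whole input sequence (L1 loss assumed here)
        recon_loss = (self.recon_head(h) - frames).abs().mean()

        # Objective 2: sequence-level CTC over phoneme labels
        log_probs = self.phone_head(h).log_softmax(-1).transpose(0, 1)  # (T, B, C)
        ctc_loss = self.ctc(log_probs, phones, frame_lens, phone_lens)

        return recon_loss + self.ctc_weight * ctc_loss
```

After pretraining with such a loss, the encoder's hidden states `h` would be extracted per utterance and fed as input features to a downstream x-vector-style DNN, in place of MFCCs, which is the usage the abstract reports for the speaker and language recognition experiments.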