BERTphone: Phonetically-aware Encoder Representations for Utterance-level Speaker and Language Recognition

Author: Katrin Kirchhoff, Yuzong Liu, Julian Salazar, Shaoshi Ling
Publication year: 2020
Source: Odyssey
DOI: 10.21437/odyssey.2020-2
Description: We introduce BERTphone, a Transformer encoder trained on large speech corpora that outputs phonetically-aware contextual representation vectors usable for both speaker and language recognition. This is accomplished by training on two objectives: the first, inspired by adapting BERT to the continuous domain, masks spans of input frames and reconstructs the whole sequence for acoustic representation learning; the second, inspired by the success of bottleneck features from ASR, applies a sequence-level CTC loss to phoneme labels for phonetic representation learning. We pretrain two BERTphone models (one on Fisher and one on TED-LIUM) and use them as feature extractors for x-vector-style DNNs on both tasks. We attain a state-of-the-art $C_{\text{avg}}$ of 6.16 on the challenging LRE07 3sec closed-set language recognition task. On Fisher and VoxCeleb speaker recognition tasks, we see an 18% relative reduction in speaker EER when training on BERTphone vectors instead of MFCCs. In general, BERTphone outperforms previous phonetic pretraining approaches on the same data. We release our code and models at https://github.com/awslabs/speech-representations.
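The description names two training objectives: masked-span reconstruction of the input frames and a sequence-level CTC loss over phoneme labels. A minimal PyTorch-style sketch of how the two losses might combine is given below; the names (encoder, reconstruct_head, phone_head), the loss weighting LAMBDA, and the L1 reconstruction criterion are illustrative assumptions, not taken from the released code.

    # Hypothetical sketch of BERTphone's two objectives; not the released code.
    import torch
    import torch.nn.functional as F

    LAMBDA = 1.0  # assumed weighting between the two losses

    def bertphone_loss(encoder, reconstruct_head, phone_head,
                       frames, masked_frames, phone_labels,
                       frame_lens, label_lens):
        """frames: (B, T, F) clean features; masked_frames: copy with spans masked."""
        hidden = encoder(masked_frames)          # (B, T, D) contextual vectors

        # Objective 1: reconstruct the whole input sequence from the masked
        # input (continuous-domain analogue of BERT's masked LM; L1 is an
        # assumed choice of reconstruction criterion).
        recon = reconstruct_head(hidden)         # (B, T, F)
        recon_loss = F.l1_loss(recon, frames)

        # Objective 2: sequence-level CTC over phoneme labels (ASR-style
        # bottleneck supervision for phonetic awareness).
        log_probs = F.log_softmax(phone_head(hidden), dim=-1)   # (B, T, P+1)
        ctc_loss = F.ctc_loss(log_probs.transpose(0, 1),        # CTC expects (T, B, P+1)
                              phone_labels, frame_lens, label_lens)

        return recon_loss + LAMBDA * ctc_loss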
Odyssey 2020 camera-ready (presented Nov. 2020)
Database: OpenAIRE