Silent versus modal multi-speaker speech recognition from ultrasound and video
Author: | Steve Renals, Korin Richmond, Aciel Eshky, Manuel Sam Ribeiro |
Year of publication: | 2021 |
Subject: | Convex hull; FOS: Computer and information sciences; Sound (cs.SD); ultrasound tongue imaging; Computer science; Speech recognition; Adaptation; video lip imaging; Quantitative Biology - Quantitative Methods (q-bio.QM); Computer Science - Sound; fMLLR; Audio and Speech Processing (eess.AS); FOS: Electrical engineering, electronic engineering, information engineering; silent speech; Computer Science - Computation and Language (cs.CL); articulatory speech recognition; Ultrasound; silent speech interfaces; modal speech; Duration; FOS: Biological sciences; Utterance; Word; Electrical Engineering and Systems Science - Audio and Speech Processing |
Source: | Ribeiro, M S, Eshky, A, Richmond, K & Renals, S 2021, 'Silent versus modal multi-speaker speech recognition from ultrasound and video', in 22nd Annual Conference of the International Speech Communication Association, INTERSPEECH 2021. Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, pp. 641-645, Interspeech 2021, Brno, Czech Republic, 30/08/21. https://doi.org/10.21437/Interspeech.2021-23 |
DOI: | 10.48550/arxiv.2103.00333 |
Description: | We investigate multi-speaker speech recognition from ultrasound images of the tongue and video images of the lips. We train our systems on imaging data from modal speech, and evaluate on matched test sets of two speaking modes: silent and modal speech. We observe that silent speech recognition from imaging data underperforms modal speech recognition, likely due to a speaking-mode mismatch between training and testing. We improve silent speech recognition performance using techniques that address the domain mismatch, such as fMLLR and unsupervised model adaptation. We also analyse the properties of silent and modal speech in terms of utterance duration and the size of the articulatory space. To estimate the articulatory space, we compute the convex hull of tongue splines extracted from ultrasound tongue images. Overall, we observe that silent speech has longer durations than modal speech, and that it covers a smaller articulatory space. Although both properties differ significantly across speaking modes, neither correlates directly with word error rates from speech recognition. Comment: 5 pages, 5 figures, Submitted to Interspeech 2021 |
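The articulatory-space estimate described in the abstract (the area of the convex hull of tongue-spline points) can be sketched in plain Python. This is an illustrative reconstruction, not the authors' code: the hull is built with Andrew's monotone-chain algorithm and the area computed with the shoelace formula; the sample points are hypothetical 2D tongue-spline coordinates.

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull; returns vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Drop the last point of each half (it repeats the other half's start)
    return lower[:-1] + upper[:-1]


def hull_area(points):
    """Shoelace area of the convex hull: a proxy for articulatory space size."""
    h = convex_hull(points)
    n = len(h)
    return 0.5 * abs(sum(h[i][0] * h[(i + 1) % n][1]
                         - h[(i + 1) % n][0] * h[i][1]
                         for i in range(n)))


# Hypothetical spline points pooled over one speaker's utterances:
# a unit square plus an interior point, whose hull area is 1.0.
sample = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0), (0.5, 0.5)]
print(hull_area(sample))
```

In the paper's analysis, a smaller hull area for silent speech (points pooled per speaker and mode) indicates a reduced articulatory space relative to modal speech.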
Database: | OpenAIRE |
External link: |