A silent speech system based on permanent magnet articulography and direct synthesis
Author: Lam Aun Cheah, Jie Bai, Roger K. Moore, Stephen R. Ell, Phil D. Green, James M. Gilbert, José A. González
Year of publication: 2016
Subject: Voice activity detection; Audio signal; Computer science; Speech recognition; Acoustic model; Speech synthesis; Intelligibility (communication); Speech processing; Silent speech interface; Generative model; Theoretical Computer Science; Human-Computer Interaction; Speech-language pathology and audiology; Acoustics; Software
Source: Computer Speech & Language, 39:67–87
ISSN: 0885-2308
DOI: 10.1016/j.csl.2016.02.002
Description:

Highlights:
- This paper introduces a 'Silent Speech Interface' (SSI) with the potential to restore the power of speech to people who have completely lost their voices.
- Small, unobtrusive magnets are attached to the lips and tongue, and changes in the magnetic field are sensed as the 'speaker' mouths what s/he wants to say.
- The sensor data is transformed into acoustic data by a speaker-dependent transformation learned from parallel acoustic and sensor data.
- The machine learning technique used here is a mixture of factor analysers.
- Results are presented for 3 speakers that demonstrate the SSI is capable of producing 'speech' which is both intelligible and natural.

In this paper we present a silent speech interface (SSI) system aimed at restoring speech communication for individuals who have lost their voice due to laryngectomy or diseases affecting the vocal folds. In the proposed system, articulatory data captured from the lips and tongue using permanent magnet articulography (PMA) are converted into audible speech using a speaker-dependent transformation learned from simultaneous recordings of PMA and audio signals acquired before laryngectomy. The transformation is represented using a mixture of factor analysers, a generative model that allows us to efficiently model non-linear behaviour and perform dimensionality reduction at the same time. The learned transformation is then deployed during normal usage of the SSI to restore the acoustic speech signal associated with the captured PMA data. The proposed system is evaluated using objective quality measures and listening tests on two databases containing PMA and audio recordings for normal speakers. Results show that it is possible to reconstruct speech from articulator movements captured by an unobtrusive technique without an intermediate recognition step. The SSI is capable of producing speech of sufficient intelligibility and naturalness that the speaker is clearly identifiable, but problems remain in scaling up the process to function consistently for phonetically rich vocabularies.
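The direct articulatory-to-acoustic conversion described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it trains a full-covariance Gaussian mixture on joint PMA/acoustic frames as a stand-in for the mixture of factor analysers, and applies the standard minimum mean-square-error conversion rule per frame. The feature dimensions, component count, and data are invented for the example; a real system would use parallel PMA and acoustic feature frames recorded from the same speaker.

```python
# Sketch of frame-wise PMA-to-acoustic conversion with a joint mixture model.
# A full-covariance GMM stands in for the paper's mixture of factor analysers;
# the MMSE mapping E[y | x] has the same form in both cases.
# All dimensions and data below are illustrative assumptions.
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n_frames, dim_pma, dim_ac = 2000, 9, 25            # assumed feature sizes
X = rng.normal(size=(n_frames, dim_pma))           # PMA (sensor) frames
Y = 0.5 * X @ rng.normal(size=(dim_pma, dim_ac)) \
    + 0.1 * rng.normal(size=(n_frames, dim_ac))    # parallel acoustic frames

# 1) Train a joint density model p(x, y) on stacked feature vectors.
Z = np.hstack([X, Y])
gmm = GaussianMixture(n_components=8, covariance_type="full",
                      random_state=0).fit(Z)

def convert(x_frames):
    """MMSE mapping E[y | x] under the joint mixture, one acoustic frame per PMA frame."""
    K = gmm.n_components
    mu_x = gmm.means_[:, :dim_pma]
    mu_y = gmm.means_[:, dim_pma:]
    S_xx = gmm.covariances_[:, :dim_pma, :dim_pma]
    S_yx = gmm.covariances_[:, dim_pma:, :dim_pma]

    # Posterior p(k | x) from the marginal mixture over the PMA features.
    lik = np.stack([multivariate_normal(mu_x[k], S_xx[k]).pdf(x_frames)
                    for k in range(K)], axis=1)
    post = gmm.weights_ * lik
    post /= post.sum(axis=1, keepdims=True)

    # Component-wise conditional means, blended by the posteriors.
    y_hat = np.zeros((len(x_frames), dim_ac))
    for k in range(K):
        cond = mu_y[k] + (x_frames - mu_x[k]) @ np.linalg.solve(S_xx[k], S_yx[k].T)
        y_hat += post[:, [k]] * cond
    return y_hat

Y_hat = convert(X[:5])
print(Y_hat.shape)   # (5, 25): predicted acoustic frames for 5 PMA frames
```

In this sketch the predicted acoustic frames would then be passed to a vocoder to produce a waveform; the mixture of factor analysers used in the paper additionally constrains each component's covariance to a low-rank-plus-diagonal form, which is what provides the built-in dimensionality reduction mentioned in the abstract.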
Database: OpenAIRE
External link: