Visual speech perception experiments using a video speech synthesizer
Author: M. McGrath, A. Q. Summerfield, N. M. Brooke
Year of publication: 1984
Subject: Speech perception; Acoustics and Ultrasonics; Computer science; Acoustics; Speech recognition; Speech synthesis; Animation; Frame rate; Identification (information); Arts and Humanities (miscellaneous); Perception; Articulatory gestures
Source: The Journal of the Acoustical Society of America, 76:S81
ISSN: 0001-4966
Description: A general-purpose computer-graphics package has been implemented for displaying animated simulations of a variable range of facial topographies, shapes, and articulatory gestures in real time, at a rate of 50 frames per second. The topographies can now include features, such as the teeth, which may be only intermittently visible. Animation is achieved by supplying streams of time-varying positional data, presently obtained from measurements of talkers speaking VCV or CVC utterances, but strategies for synthesis by interpolating between successive target configurations will also be discussed. The package can generate (a) spatially and temporally graded continua of stimuli, including stimuli in which the movements of normally related articulators are deliberately decoupled, (b) different subsets of articulators, and (c) different talkers. Experience gained from a prototypical identification experiment using /aCV/ utterances has been used to develop a refined facial model, which is being applied to study the relative importance of lip and teeth movements in the perception of nondiphthongal vowels. The results will be reported. [Work supported by MRC.]
Database: OpenAIRE
External link:
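The abstract mentions a synthesis strategy of interpolating between successive articulatory target configurations, sampled at the display rate of 50 frames per second. The sketch below illustrates one plausible reading of that idea using simple linear interpolation; the function name, parameter layout, and the linear form are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch: generate per-frame articulator parameters by
# linearly interpolating between timed target configurations, at the
# 50 fps display rate stated in the abstract. Data layout is assumed.

FRAME_RATE = 50  # frames per second, as stated in the abstract

def interpolate_targets(targets, duration_s):
    """Return a list of per-frame parameter tuples.

    targets: list of (time_s, params) pairs, ordered by time; params is a
             tuple of articulator measurements (e.g. lip opening, jaw height).
    duration_s: total utterance duration in seconds.
    """
    frames = []
    n_frames = int(duration_s * FRAME_RATE)
    for i in range(n_frames):
        t = i / FRAME_RATE
        for (t0, p0), (t1, p1) in zip(targets, targets[1:]):
            if t0 <= t <= t1:
                # Linear blend between the two bracketing targets.
                a = (t - t0) / (t1 - t0) if t1 > t0 else 0.0
                frames.append(tuple(x0 + a * (x1 - x0)
                                    for x0, x1 in zip(p0, p1)))
                break
        else:
            frames.append(targets[-1][1])  # past the last target: hold it
    return frames

# Example: two articulator parameters moving through three targets.
targets = [(0.0, (0.0, 1.0)), (0.2, (1.0, 0.5)), (0.4, (0.0, 1.0))]
frames = interpolate_targets(targets, 0.4)
```

At 50 fps a 0.4 s utterance yields 20 frames; each frame's parameters would then drive the facial model's vertex positions for display.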