Real-time lip-synch face animation driven by human voice

Authors: Fu Jie Huang, Tsuhan Chen
Year of publication: 2002
Subject:
Source: MMSP
DOI: 10.1109/mmsp.1998.738959
Description: In this demo, we present a technique for synthesizing mouth movements from acoustic speech information. The algorithm maps the audio parameter set to the visual parameter set using a Gaussian mixture model (GMM) and a hidden Markov model (HMM). With this technique, we can create smooth and realistic lip movements.
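
The abstract describes an audio-to-visual parameter mapping built on a GMM and an HMM. The sketch below illustrates only the GMM half of such a pipeline under stated assumptions: it fits a joint Gaussian mixture on stacked audio/visual feature vectors and estimates lip parameters for a new audio frame as the conditional expectation E[v | a]. The feature dimensions, training data, and use of scikit-learn are placeholders, and the HMM stage (temporal state modeling/smoothing) is omitted; this is not the authors' implementation.

```python
# Minimal GMM-based audio-to-visual mapping sketch (illustrative only).
# Assumed: 12-dim audio features, 6-dim visual (lip) parameters, scikit-learn.
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.stats import multivariate_normal

AUDIO_DIM = 12    # e.g., MFCC-like acoustic features (assumption)
VISUAL_DIM = 6    # e.g., mouth-shape parameters (assumption)

def fit_joint_gmm(audio_feats, visual_feats, n_components=8):
    """Fit a GMM on joint [audio | visual] vectors (frames x dims)."""
    joint = np.hstack([audio_feats, visual_feats])
    gmm = GaussianMixture(n_components=n_components, covariance_type="full")
    gmm.fit(joint)
    return gmm

def audio_to_visual(gmm, a):
    """Map one audio frame to visual parameters via E[v | a] under the GMM."""
    weights, cond_means = [], []
    for k in range(gmm.n_components):
        mu, cov = gmm.means_[k], gmm.covariances_[k]
        mu_a, mu_v = mu[:AUDIO_DIM], mu[AUDIO_DIM:]
        cov_aa = cov[:AUDIO_DIM, :AUDIO_DIM]
        cov_va = cov[AUDIO_DIM:, :AUDIO_DIM]
        # Responsibility of component k given the observed audio frame.
        w_k = gmm.weights_[k] * multivariate_normal.pdf(a, mean=mu_a, cov=cov_aa)
        # Conditional mean of the visual block given the audio block.
        m_k = mu_v + cov_va @ np.linalg.solve(cov_aa, a - mu_a)
        weights.append(w_k)
        cond_means.append(m_k)
    weights = np.array(weights)
    weights /= weights.sum()
    return np.sum(weights[:, None] * np.array(cond_means), axis=0)

# Usage with random stand-in data (replace with real audio/visual frames).
rng = np.random.default_rng(0)
train_audio = rng.normal(size=(500, AUDIO_DIM))
train_visual = rng.normal(size=(500, VISUAL_DIM))
model = fit_joint_gmm(train_audio, train_visual)
lip_params = audio_to_visual(model, rng.normal(size=AUDIO_DIM))
print(lip_params.shape)  # (VISUAL_DIM,)
```

In a real-time system the per-frame estimates would then be smoothed over time (the role the paper assigns to the HMM) before driving the face model.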
Database: OpenAIRE