Popis: |
Visual speech recognition is emerging as an important research area in human–computer interaction. Most work in this area has focused on lip-reading using the frontal view of the speaker or on views available from multiple cameras. However, in the absence of views from different angles, profile information about the speech articulators is lost. This chapter estimates lip protrusion from images of only the frontal pose of the speaker. With our proposed methodology, computing an estimate of lip profile information from frontal features increases system efficiency without expensive hardware and without adding computational overhead. We also show that lip protrusion is a key speech articulator and that other prominent articulators are contained within the centre area of the mouth.