Gaze-informed multimodal interaction
Author: | Pernilla Qvarfordt |
---|---|
Year of publication: | 2017 |
Subject: |
Joint attention; Modality (human–computer interaction); Eye movement; Gaze; Multimodal interaction; Human–computer interaction; Eye tracking; Computer vision; Input method; Artificial intelligence; Psychology; Computing Methodologies: Image Processing and Computer Vision; Information Systems: Models and Principles |
Source: | The Handbook of Multimodal-Multisensor Interfaces, Volume 1 (1) |
DOI: | 10.1145/3015783.3015794 |
Description: | Observe a person pointing out and describing something. Where is that person looking? Chances are good that this person also looks at what she is talking about and pointing at. Gaze is naturally coordinated with our speech and hand movements. By utilizing this tendency, we can create natural interaction with computing devices and environments. In multimodal gaze interaction, data from eye trackers are used as an active input mode, where, for instance, gaze serves as an alternative, or complementary, pointing modality along with other input modalities. Using gaze as an active, or explicit, input method is challenging for several reasons. One of them is that the eyes are primarily used for perceiving our environment, so knowing when a person is selecting an item with gaze versus just looking around is an issue. Researchers have tried to solve this by combining gaze with various input methods, such as manual pointing, speech, and touch.

However, gaze information can also be used in interactive systems for purposes other than explicit pointing, since a user's gaze is a good indication of the user's attention. In passive gaze interaction, gaze is not used as the primary input method but as a supporting one. In these kinds of systems, gaze is mainly used for inferring and reasoning about the user's cognitive state or activities in a way that can support the interaction. Such multimodal systems often combine gaze with a multitude of input modalities. One example is to detect what features in an image a person is looking for, and to use this information to suggest regions or other images that the person has not yet seen.

In this chapter, eye movements and eye-tracking data analysis are first reviewed (Section 9.2), followed by a discussion of eye movements in relation to other modalities (Section 9.3), to provide basic knowledge about eye tracking and gaze behavior. In Section 9.4, systems using gaze as an active or passive input method are discussed. Finally, Section 9.5 concludes the chapter. |
Database: | OpenAIRE |
External link: |