GAZE AND FEET AS ADDITIONAL INPUT MODALITIES FOR INTERACTING WITH GEOSPATIAL INTERFACES

Authors: Raimund Dachselt, Ioannis Giannopoulos, Arzu Çöltekin, Sophie Stellmach, Julia Hempel, Alzbeta Brychtova
Contributors: Halounova, L., Li, S., Šafář, V., Tomková, M., Rapant, P., Brázdil, K., Shi, W., Anton, F., Liu, Y., Stein, A., Cheng, T., Pettit, C., Li, Q.-Q., Sester, M., Mostafavi, M.A., Madden, M., Tong, X., Brovelli, M.A., Haekyong, K., Kawashima, H., Çöltekin, A., University of Zurich, Cöltekin, Arzu
Language: English
Year of publication: 2016
Subject:
Gaze Interaction
Foot Interaction
Multimodal Input
Gesture
User Interfaces
Human–computer interaction
Usability
Iterative design
GIS
Geospatial analysis
Zoom
Gaze
Computer science
Earth sciences
Source: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol III-2, Pp 113-120 (2016)
ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, III-2
ISSN: 2194-9050
2194-9042
Description: Geographic Information Systems (GIS) are complex software environments, and working with them often involves multiple tasks and multiple displays. However, user input is still limited to mouse and keyboard in most workplace settings. In this project, we demonstrate how gaze and feet, used as additional input modalities, can overcome time-consuming and annoying mode switches between frequently performed tasks. In an iterative design process, we developed gaze- and foot-based methods for zooming and panning map visualizations. We first collected appropriate gestures in a preliminary user study with a small group of experts and designed two interaction concepts based on their input. After implementation, we evaluated the two concepts comparatively in another user study to identify the strengths and shortcomings of each. We found that continuous foot input combined with implicit gaze input is promising for supportive tasks.
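The interaction concept summarized above, implicit gaze input anchoring the view while continuous foot input drives the zoom rate, can be sketched roughly as follows. This is a minimal illustration under assumed inputs: Viewport, zoom_about_gaze, the pedal axis value, and the gaze and screen coordinates are hypothetical names introduced for this sketch, not the authors' implementation or the API of any particular eye tracker or foot pedal.

```python
# Sketch: gaze-anchored continuous zoom driven by a foot pedal axis.
# All names here are illustrative placeholders, not a real device API.

from dataclasses import dataclass


@dataclass
class Viewport:
    center_x: float   # map coordinate at the viewport centre
    center_y: float
    scale: float      # map units per screen pixel


def zoom_about_gaze(view: Viewport,
                    gaze_px: tuple,
                    pedal_axis: float,
                    dt: float,
                    screen_size: tuple,
                    zoom_speed: float = 1.5) -> Viewport:
    """Zoom the viewport about the current gaze point.

    pedal_axis in [-1, 1]: forward tilt zooms in, backward tilt zooms out.
    The map point under the gaze position stays fixed on screen, so the
    view appears to move toward whatever the user is looking at.
    """
    # Map point currently under the gaze position.
    gx = view.center_x + (gaze_px[0] - screen_size[0] / 2) * view.scale
    gy = view.center_y + (gaze_px[1] - screen_size[1] / 2) * view.scale

    # Continuous zoom: exponential scaling proportional to pedal deflection.
    factor = (1.0 / zoom_speed) ** (pedal_axis * dt)
    new_scale = view.scale * factor

    # Re-centre so the gazed-at map point stays under the gaze position.
    new_cx = gx - (gaze_px[0] - screen_size[0] / 2) * new_scale
    new_cy = gy - (gaze_px[1] - screen_size[1] / 2) * new_scale
    return Viewport(new_cx, new_cy, new_scale)


# Example: one 16 ms frame with the pedal tilted fully forward and the
# gaze resting at screen position (800, 450) on a 1600x900 display.
view = Viewport(center_x=0.0, center_y=0.0, scale=10.0)
view = zoom_about_gaze(view, (800, 450), pedal_axis=1.0, dt=0.016,
                       screen_size=(1600, 900))
```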
Database: OpenAIRE