Transforming Object Locations on a 2D Visual Display into Cued Locations in 3D Auditory Space

Author: Erik Brown, Andy Isaacson, Anthony J. Hornof, Tim Halverson
Year of publication: 2008
Source: Proceedings of the Human Factors and Ergonomics Society Annual Meeting. 52:1170-1174
ISSN: 1071-1813, 2169-5067
DOI: 10.1177/154193120805201804
Description: Anthony Hornof, Tim Halverson, Andy Isaacson, and Erik Brown, University of Oregon, Eugene, Oregon, USA

An empirical study explored the extent to which people can map locations in auditory space to locations on a visual display for four different transformations (or mappings) between auditory and visual surfaces. Participants were trained in each of four transformations: horizontal square, horizontal arc, vertical square, and vertical spherical surface. On each experimental trial, a sound was played through headphones connected to a spatialized sound system that uses a non-individualized head-related transfer function. The participant's task was to determine, using one transformation at a time, which of two objects on a visual display corresponded to the location of the sound. Though the two vertical transformations provided a more direct stimulus-response compatibility with the visual display, the two horizontal transformations made better use of the human auditory system's ability to localize sound, and resulted in better performance. Eye movements were analyzed, and it was found that the horizontal arc transformation provided the best auditory cue for moving the eyes to the correct visual target location with a single saccade.

Auditory displays are routinely used to keep an operator abreast of what is happening in the visual periphery. Auditory alerts often direct attention to visual displays in cars, aircraft, and computer interfaces. Though characteristics of sound such as pitch, timbre, and timing are good for conveying specific encodings (Gaver, 1997), the physical location of an auditory alert in three-dimensional space can also convey useful meaning. The location of an auditory alert in three-dimensional (3D) space could, for example, help an air traffic controller to direct his or her visual attention to a particular blip on a radar screen.

Previous research has examined the extent to which people can discriminate the precise location of auditory stimuli. Reported localization accuracy varies with experimental conditions such as physical versus virtual localization (Wightman & Kistler, 1989), the use of non-individualized head-related transfer functions (Wenzel, Arruda, Kistler, & Wightman, 1993), and egocentric versus exocentric localization (Simpson et al., 2007). In general, people can distinguish the locations of auditory sound sources better when there is greater separation between them, requiring roughly 9° of azimuth or 12° of elevation (Begault, 1994, p. 67; Grantham, Hornsby, & Erpenbeck, 2003).

Additional research has investigated the utility of spatialized audio for locating visual targets. In general, people can distinguish the locations of aurally-cued visual targets better when the visual display is sparse and the auditory cues are reliable (Perrott, Sadralodabai, & Saberi, 1991; Vu & Strybel, 2006). However, little if any research has explored the potential benefits of transforming a small visual region into a larger auditory space. If spatialized audio is to be used to direct visual attention to a location on a small visual display, the best spatial resolution might be obtained if the visual display is expanded and transformed into a larger auditory space, but there are many possible ways to make this transformation.

The experiment presented here explores the extent to which people can map locations in auditory space to locations in visual space for four different transformations (or mappings) between auditory and visual space. The goal is to provide a specific recommendation for how to best convey the location of an object on a 2D visual display using 3D audio.
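To make the transformation idea concrete, the sketch below shows one plausible "horizontal arc" style mapping from a normalized 2D display position to an azimuth/elevation/distance cue for a spatialized sound renderer. The arc span, the distance range, and the function `horizontal_arc_cue` are illustrative assumptions for this sketch; the paper does not publish these formulas.

```python
# A minimal sketch of one possible display-to-audio transformation.
# All numbers here are illustrative assumptions, not values from the
# paper: the arc is assumed to span +/- 60 degrees of azimuth at ear
# height, with vertical screen position rendered as source distance.

AZIMUTH_SPAN_DEG = 120.0   # assumed total width of the auditory arc
NEAR_M, FAR_M = 1.0, 2.0   # assumed distance range for the sound source

def horizontal_arc_cue(x_norm: float, y_norm: float) -> tuple[float, float, float]:
    """Map a normalized display location (x, y in 0..1) to an
    (azimuth_deg, elevation_deg, distance_m) cue for a spatialized
    sound renderer. Hypothetical 'horizontal arc' style mapping."""
    azimuth = (x_norm - 0.5) * AZIMUTH_SPAN_DEG    # screen left/right -> audio left/right
    elevation = 0.0                                # the arc lies at ear height
    distance = NEAR_M + y_norm * (FAR_M - NEAR_M)  # screen top/bottom -> near/far
    return azimuth, elevation, distance

# An object at the right edge, halfway down the display:
print(horizontal_arc_cue(1.0, 0.5))  # -> (60.0, 0.0, 1.5)
```

Under these assumed numbers, the roughly 9° azimuth resolution cited above would let a 120° arc distinguish on the order of a dozen horizontal positions, which illustrates why expanding a small display region into a larger auditory space could improve spatial resolution.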
Database: OpenAIRE