Extrafoveal attentional capture by object semantics

Authors: Antje Nuthmann, Floor De Groot, Christian N. L. Olivers, Falk Huettig
Contributors: Cognitive Psychology, IBBA
Year of publication: 2018
Subject:
Male
Female
Adult
Young Adult
Adolescent
Humans
Fovea Centralis
Foveal
Visual Perception
Visual System
Vision, Ocular
Visual Cognition
Eye Movements
Saccades
Fixation, Ocular
Reaction Time
Stimulus (Physiology)
Pattern Recognition, Visual
Attention
Target Detection
Semantics
Lexical Semantics
Linguistics
Language & Communication
Cognitive Psychology
Cognitive Science
Psychology
Neuroscience
Physiology
Sensory Physiology
Sensory Perception
Sensory Systems
Computer Science
Computer Vision
Computer and Information Sciences
Medicine
Medicine and Health Sciences
Biology and Life Sciences
Social Sciences
Physical Sciences
Mathematics
Geometry
Radii
Science
Multidisciplinary
Experimental Design
Research Design
Random Allocation
Research and Analysis Methods
Research Article
Correction
Source: PLoS ONE, Vol. 14, Iss. 5, e0217051 (2019). Public Library of Science.
Nuthmann, A, De Groot, F, Huettig, F & Olivers, C N L 2019, 'Extrafoveal attentional capture by object semantics', PLoS ONE, vol. 14, no. 5, e0217051. https://doi.org/10.1371/journal.pone.0217051
ISSN: 1932-6203
DOI: 10.1371/journal.pone.0217051
Description: There is an ongoing debate about whether object meaning can be processed outside foveal vision, making semantics available for attentional guidance. Much of the debate has centred on whether objects that do not fit within an overall scene draw attention, in complex displays that are often difficult to control. Here, we revisited the question by reanalysing data from three experiments that used displays consisting of standalone objects from a carefully controlled stimulus set. Observers searched for a target object, specified by an auditory instruction. On the critical trials, the displays contained no target but instead objects that were semantically related to the target, visually related to it, or unrelated. Analyses using (generalized) linear mixed-effects models showed that, although visually related objects attracted the most attention, semantically related objects were also fixated earlier in time than unrelated objects. Moreover, semantic matches affected the very first saccade in the display. The amplitudes of saccades that first entered semantically related objects were larger than 5° on average, confirming that object semantics is available outside foveal vision. Finally, there was no semantic capture of attention for the same objects when observers did not actively look for the target, confirming that the capture was not stimulus-driven. We discuss the implications for existing models of visual cognition.
Database: OpenAIRE