Flexible constraint hierarchy during the visual encoding of tool-object interactions.

Author: Bayani KYT; School of Biological Sciences, Georgia Institute of Technology, Atlanta, Georgia, USA., Natraj N; School of Biological Sciences, Georgia Institute of Technology, Atlanta, Georgia, USA.; Weill Institute of Neurosciences, University of California, San Francisco, California, USA., Gale MK; School of Biological Sciences, Georgia Institute of Technology, Atlanta, Georgia, USA., Temples D; School of Biological Sciences, Georgia Institute of Technology, Atlanta, Georgia, USA., Atawala N; School of Biological Sciences, Georgia Institute of Technology, Atlanta, Georgia, USA., Wheaton LA; School of Biological Sciences, Georgia Institute of Technology, Atlanta, Georgia, USA.
Language: English
Source: The European journal of neuroscience [Eur J Neurosci] 2021 Oct; Vol. 54 (7), pp. 6520-6532. Date of Electronic Publication: 2021 Sep 27.
DOI: 10.1111/ejn.15460
Abstract: Tools and objects are associated with numerous action possibilities that are reduced depending on the task-related internal and external constraints presented to the observer. Action hierarchies propose that goals occupy higher levels of the hierarchy while kinematic patterns occupy lower levels. Prior work suggests that tool-object perception is heavily influenced by grasp and action context. The current study sought to evaluate whether the presence of an action hierarchy can be perceptually identified using eye tracking during tool-object observation. We hypothesized that gaze patterns would reveal a perceptual hierarchy based on the observed task context and grasp constraints. Participants viewed tool-object scenes with two types of constraints: task-context and grasp constraints. Task-context constraints consisted of correct (e.g., frying pan-spatula) and incorrect tool-object pairings (e.g., stapler-spatula). Grasp constraints involved modified tool orientations, which required participants to understand how initially awkward grasp postures could help achieve the task. The visual scene contained three areas of interest (AOIs): the object, the functional tool-end (e.g., spoon handle) and the manipulative tool-end (e.g., spoon bowl). Results revealed two distinct processes based on stimulus constraints. Goal-oriented encoding, an attentional bias towards the object and manipulative tool-end, was demonstrated when grasp did not lead to meaningful tool-use. In images where grasp postures were critical to action performance, attentional bias was primarily between the object and functional tool-end, suggesting means-related encoding of the graspable properties of the object. This study expands on previous work and demonstrates a flexible constraint hierarchy that depends on the observed task constraints.
(© 2021 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.)
Database: MEDLINE