Object weight can be rapidly predicted, with low cognitive load, by exploiting learned associations between the weights and locations of objects.

Authors: Zhang Z, Cesanek E, Ingram JN, Wolpert DM (Mortimer B. Zuckerman Mind Brain Behavior Institute and Department of Neuroscience, Columbia University, New York, New York); Flanagan JR (Department of Psychology and Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada).
Language: English
Source: Journal of Neurophysiology [J Neurophysiol] 2023 Feb 01; Vol. 129 (2), pp. 285-297. Date of Electronic Publication: 2022 Nov 09.
DOI: 10.1152/jn.00414.2022
Abstract: Weight prediction is critical for dexterous object manipulation. Previous work has focused on lifting objects presented in isolation and has examined how the visual appearance of an object is used to predict its weight. Here we tested the novel hypothesis that when interacting with multiple objects, as is common in everyday tasks, people exploit the locations of objects to directly predict their weights, bypassing the slower and more demanding processing of visual properties. Using a three-dimensional robotic and virtual reality system, we developed a task in which participants were presented with a set of objects. In each trial, a randomly chosen object translated onto the participant's hand, and the participant had to anticipate the object's weight by generating an equivalent upward force. Across conditions we controlled whether the visual appearance and/or location of the objects was informative as to their weight. Using this task, and a set of analogous web-based experiments, we show that when location information was predictive of the objects' weights, participants used this information to predict weight faster than when prediction was based on visual appearance. We suggest that by "caching" associations between locations and weights, the sensorimotor system can speed prediction while also lowering the working memory demands involved in predicting weight from an object's visual properties. NEW & NOTEWORTHY We use a novel object support task with a three-dimensional robotic interface and virtual reality system to provide evidence that the locations of objects are used to predict their weights. Using location information, rather than the visual appearance of the objects, supports fast prediction, thereby avoiding processes that can be demanding on working memory.
Database: MEDLINE