Blind people can actively manipulate virtual objects with a novel tactile device.

Authors: Memeo M; Robotics, Brain and Cognitive Sciences Department, now with Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Via Enrico Melen 83, Genoa, Italy. Sandini G; Robotics, Brain and Cognitive Sciences Department, Fondazione Istituto Italiano di Tecnologia, Via Enrico Melen 83, Genoa, Italy. Cocchi E; Istituto David Chiossone per Ciechi e Ipovedenti Onlus, Genoa, Italy. Brayda L; Robotics, Brain and Cognitive Sciences Department, Fondazione Istituto Italiano di Tecnologia, Via Enrico Melen 83, Genoa, Italy; Acoesis srl, Via Enrico Melen 83, Genoa, Italy; Nextage srl, Piazza della Vittoria 12, Genoa, Italy. luca.brayda@nextage-on.com.
Language: English
Source: Scientific Reports [Sci Rep] 2023 Dec 21; Vol. 13 (1), pp. 22845. Date of Electronic Publication: 2023 Dec 21.
DOI: 10.1038/s41598-023-49507-1
Abstract: Frequently in rehabilitation, visually impaired persons are passive agents of exercises with fixed environmental constraints. In fact, a printed tactile map, i.e. a particular picture with a specific spatial arrangement, usually cannot be edited. Interaction with map content, instead, facilitates the learning of spatial skills because it exploits mental imagery, manipulation and strategic planning simultaneously. However, it has rarely been applied to maps, mainly because of technological limitations. This study aims to understand whether visually impaired people can autonomously build objects that are completely virtual. Specifically, we investigated whether a group of twelve blind persons, with a wide age range, could exploit mental imagery to interact with virtual content and actively manipulate it by means of a haptic device. The device is mouse-shaped and designed to jointly perceive, with one finger only, local tactile height and inclination cues of arbitrary scalar fields. Spatial information can be mentally constructed by integrating local tactile cues, given by the device, with global proprioceptive cues, given by hand and arm motion. The experiment consisted of a bi-manual task, in which one hand explored some basic virtual objects and the other hand acted on a keyboard to change the position of one object in real time. The goal was to merge basic objects into more complex objects, like a puzzle. The experiment spanned different resolutions of the tactile information. We measured task accuracy, efficiency, usability and execution time. The average accuracy in solving the puzzle was 90.5%. Importantly, accuracy was linearly predicted by efficiency, measured as the number of moves needed to solve the task. Subjective parameters linked to usability and spatial resolutions did not predict accuracy; gender modulated the execution time, with men being faster than women.
Overall, we show that building purely virtual tactile objects is possible in the absence of vision and that the process is measurable and achievable with partial autonomy. Introducing virtual tactile graphics in rehabilitation protocols could facilitate the stimulation of mental imagery, a basic element of the ability to orient in space. The behavioural variable introduced in the current study can be calculated after each trial and could therefore be used to automatically measure and tailor protocols to specific user needs. Looking ahead, our experimental setup could inspire remote rehabilitation scenarios for visually impaired people.
(© 2023. The Author(s).)
Database: MEDLINE