NeuralFeels with neural fields: Visuotactile perception for in-hand manipulation.

Authors: Suresh S (Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA; FAIR, Meta, Menlo Park, CA 94025, USA); Qi H (FAIR, Meta, Menlo Park, CA 94025, USA; Department of Electrical Engineering and Computer Sciences, UC Berkeley, Berkeley, CA 94720, USA); Wu T (FAIR, Meta, Menlo Park, CA 94025, USA); Fan T (FAIR, Meta, Menlo Park, CA 94025, USA); Pineda L (FAIR, Meta, Menlo Park, CA 94025, USA); Lambeta M (FAIR, Meta, Menlo Park, CA 94025, USA); Malik J (FAIR, Meta, Menlo Park, CA 94025, USA; Department of Electrical Engineering and Computer Sciences, UC Berkeley, Berkeley, CA 94720, USA); Kalakrishnan M (FAIR, Meta, Menlo Park, CA 94025, USA); Calandra R (Institute of Artificial Intelligence, Technische Universität Dresden, 01062 Dresden, Germany; Centre for Tactile Internet with Human-in-the-Loop (CeTI), 01062 Dresden, Germany); Kaess M (Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA); Ortiz J (FAIR, Meta, Menlo Park, CA 94025, USA); Mukadam M (FAIR, Meta, Menlo Park, CA 94025, USA)
Language: English
Source: Science Robotics [Sci Robot] 2024 Nov 13; Vol. 9 (96), pp. eadl0628. Date of Electronic Publication: 2024 Nov 13.
DOI: 10.1126/scirobotics.adl0628
Abstract: To achieve human-level dexterity, robots must infer spatial awareness from multimodal sensing to reason over contact interactions. During in-hand manipulation of novel objects, such spatial awareness involves estimating the object's pose and shape. The status quo for in-hand perception primarily uses vision and is restricted to tracking a priori known objects. Moreover, visual occlusion of objects in hand is inevitable during manipulation, preventing current systems from pushing beyond tasks without occlusion. We combined vision and touch sensing on a multifingered hand to estimate an object's pose and shape during in-hand manipulation. Our method, NeuralFeels, encodes object geometry by learning a neural field online and jointly tracks it by optimizing a pose graph problem. We studied multimodal in-hand perception in simulation and the real world, interacting with different objects via a proprioception-driven policy. Our experiments showed final reconstruction F scores of 81% and average pose drifts of 4.7 millimeters, which were further reduced to 2.3 millimeters with known object models. In addition, we observed that, under heavy visual occlusion, we could achieve improvements in tracking of up to 94% compared with vision-only methods. Our results demonstrate that touch, at the very least, refines and, at the very best, disambiguates visual estimates during in-hand manipulation. We release our evaluation dataset of 70 experiments, FeelSight, as a step toward benchmarking in this domain. Our neural representation driven by multimodal sensing can serve as a perception backbone toward advancing robot dexterity.
Database: MEDLINE
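
The abstract describes the core of NeuralFeels as learning a neural field of object geometry online while simultaneously tracking the object's pose from vision and touch. The following is a minimal sketch of that idea, not the authors' released implementation: it assumes PyTorch, uses synthetic stand-in points in place of fused visual and tactile depth, and collapses shape learning and pose estimation into one joint gradient loop; all names, model sizes, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the authors' code): jointly optimize a neural
# signed-distance field (object shape) and an object pose against surface points,
# standing in for fused visual and tactile depth measurements.
import torch
import torch.nn as nn


class SDFField(nn.Module):
    """Small MLP mapping 3D points in the object frame to a signed distance."""

    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)


def axis_angle_to_matrix(w):
    """Rodrigues' formula: 3-vector axis-angle -> 3x3 rotation matrix."""
    theta = torch.sqrt((w * w).sum() + 1e-12)  # safe near zero rotation
    k = w / theta
    K = torch.zeros(3, 3, dtype=w.dtype)
    K[0, 1], K[0, 2] = -k[2], k[1]
    K[1, 0], K[1, 2] = k[2], -k[0]
    K[2, 0], K[2, 1] = -k[1], k[0]
    I = torch.eye(3, dtype=w.dtype)
    return I + torch.sin(theta) * K + (1 - torch.cos(theta)) * (K @ K)


# Synthetic placeholder for measured surface points in the world frame.
surface_pts = torch.randn(512, 3) * 0.03  # roughly a 3 cm object

field = SDFField()
pose_rot = torch.zeros(3, requires_grad=True)    # axis-angle, world -> object
pose_trans = torch.zeros(3, requires_grad=True)  # translation, world -> object

opt = torch.optim.Adam([
    {"params": field.parameters(), "lr": 1e-3},
    {"params": [pose_rot, pose_trans], "lr": 1e-2},
])

for step in range(200):
    opt.zero_grad()
    R = axis_angle_to_matrix(pose_rot)
    pts_obj = surface_pts @ R.T + pose_trans   # map measurements into the object frame
    surf_loss = field(pts_obj).abs().mean()    # SDF should vanish on the surface

    # Eikonal regularizer: |grad SDF| ~ 1 at random points keeps the field well behaved.
    rand = (torch.rand(256, 3) - 0.5) * 0.1
    rand.requires_grad_(True)
    grad = torch.autograd.grad(field(rand).sum(), rand, create_graph=True)[0]
    eik_loss = ((grad.norm(dim=-1) - 1.0) ** 2).mean()

    loss = surf_loss + 0.1 * eik_loss
    loss.backward()
    opt.step()
```

In the paper's formulation, shape learning and pose tracking are interleaved rather than merged into a single loss as above: the neural field is trained online from posed visual and tactile depth, while the pose is estimated by optimizing a pose graph problem against the current field.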