MetaGrasp: Data Efficient Grasping by Affordance Interpreter Network
Author: Zhanpeng Zhang, Jingcheng Su, Junhao Cai, Hui Cheng
Year of publication: 2019
Subject: Computer Science - Computer Vision and Pattern Recognition (cs.CV); Computer Science - Robotics (cs.RO); deep learning; machine learning; robotic grasping; grippers; affordance; robustness; training systems
Source: ICRA
Description: Data-driven approaches to grasping have shown significant advances recently, but these approaches usually require large amounts of training data. To increase the efficiency of grasping data collection, this paper presents a novel grasp training system covering the whole pipeline from data collection to model inference. The system collects effective grasp samples with a corrective strategy guided by the antipodal grasp rule, and we design an affordance interpreter network to predict a pixel-wise grasp affordance map. We define graspability, ungraspability, and background as the grasp affordances. The key advantage of our system is that the pixel-level affordance interpreter network, trained with only a small number of grasp samples collected under the antipodal rule, achieves strong performance on entirely unseen objects and backgrounds. Training samples are collected only in simulation. Extensive qualitative and quantitative experiments demonstrate the accuracy and robustness of our proposed approach. In real-world grasp experiments, we achieve a grasp success rate of 93% on a set of household items and 91% on a set of adversarial items with only about 6,300 simulated samples; we also achieve 87% accuracy in cluttered scenes. Although the model is trained using only RGB images, it also performs well when the background textures are changed, reaching up to 94% accuracy on the set of adversarial objects, which outperforms current state-of-the-art methods. Comment: 7 pages, 10 figures, IEEE International Conference on Robotics and Automation 2019
Database: OpenAIRE
External link:
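
The record includes no code; the following is a minimal PyTorch sketch of the kind of pixel-wise, three-class affordance predictor the abstract describes (graspable / ungraspable / background). The encoder-decoder shape, layer sizes, and all names here are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch of a pixel-wise three-class affordance predictor.
# NOT the paper's architecture: every layer size and name is assumed.
import torch
import torch.nn as nn

class AffordanceInterpreterSketch(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        # Encoder: downsample the RGB input while widening channels.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: upsample back to input resolution for per-pixel logits.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, num_classes, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        # rgb: (B, 3, H, W) -> logits: (B, num_classes, H, W),
        # one logit map per affordance class.
        return self.decoder(self.encoder(rgb))

if __name__ == "__main__":
    net = AffordanceInterpreterSketch()
    logits = net(torch.randn(1, 3, 224, 224))
    affordance_map = logits.softmax(dim=1)  # per-pixel class probabilities
    print(affordance_map.shape)             # torch.Size([1, 3, 224, 224])
```

A softmax over the class dimension turns the logits into the affordance map; the paper's actual network, training loss, and input resolution are not specified in this record.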
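
The abstract also mentions a corrective data-collection strategy assisted by the antipodal grasp rule. As a hedged illustration of that rule only (not the paper's corrective strategy), the check below tests whether two contact points form an antipodal pair; the function name and the friction coefficient are hypothetical.

```python
import numpy as np

def is_antipodal(p1, n1, p2, n2, mu=0.4):
    """Check the antipodal grasp rule for two contacts.

    p1, p2: 3D contact positions; n1, n2: unit outward surface normals.
    mu is an assumed friction coefficient; the paper's exact threshold
    is not given in this record.
    """
    u = p2 - p1
    u = u / np.linalg.norm(u)          # grasp axis
    alpha = np.arctan(mu)              # friction-cone half-angle
    # The grasp axis must lie inside both friction cones: the outward
    # normal at p1 opposes u, and the outward normal at p2 aligns with u.
    ok1 = np.arccos(np.clip(np.dot(n1, -u), -1.0, 1.0)) <= alpha
    ok2 = np.arccos(np.clip(np.dot(n2, u), -1.0, 1.0)) <= alpha
    return ok1 and ok2
```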