Author:
Han, Huiyan; Wang, Wenjun; Han, Xie; Yang, Xiaowen
Source:
Intelligent Service Robotics; Mar 2024, Vol. 17, Issue 2, p251-264, 14p
Abstract:
Grasping objects poses a significant challenge for autonomous robotic manipulation in unstructured and cluttered environments. Despite recent advances in 6-DoF (degree-of-freedom) grasp pose estimation, most existing methods fail to differentiate between points on adjacent objects, particularly when objects are positioned next to each other. The resulting imprecise orientation often leads to collisions or unsuccessful grasps. To address these challenges, this paper proposes a semantic instance reconstruction grasp network (SIRGN) that efficiently generates accurate grasping configurations. First, the foreground objects are reconstructed using the implicit semantic instance branch: through voting, the network predicts the corresponding instance for each foreground point and thereby differentiates adjacent objects. Second, to enhance the accuracy of the grasping orientation, the 3D rotation matrix is decomposed into two orthogonal unit vectors. Furthermore, the network is trained on VGN simulation grasping datasets. Declutter experiments show that the grasp success rate of SIRGN in packed and pile scenes is 89.5% and 78.1%, respectively. Experiments conducted in both simulated and real environments fully demonstrate the effectiveness of the proposed methodology. [ABSTRACT FROM AUTHOR]
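The decomposition of a 3D rotation into two orthogonal unit vectors mentioned in the abstract resembles the common continuous "two-column" rotation representation, in which a network regresses two 3D vectors and the full rotation matrix is recovered by Gram-Schmidt orthonormalization plus a cross product. The sketch below is an illustration of that general technique, not the authors' exact formulation; the function name and inputs are hypothetical.

```python
import numpy as np

def rotation_from_two_vectors(a, b):
    """Recover a proper rotation matrix from two (possibly non-orthogonal)
    3D vectors via Gram-Schmidt, as in the two-vector rotation representation.
    Hypothetical sketch; not the paper's exact parameterization."""
    r1 = a / np.linalg.norm(a)           # first column: normalize a
    b_orth = b - np.dot(r1, b) * r1      # remove b's component along r1
    r2 = b_orth / np.linalg.norm(b_orth) # second column: normalize remainder
    r3 = np.cross(r1, r2)                # third column: right-handed completion
    return np.stack([r1, r2, r3], axis=1)

# Example: two roughly orthogonal predictions are cleaned up into a valid rotation.
R = rotation_from_two_vectors(np.array([1.0, 0.2, 0.0]),
                              np.array([0.0, 1.0, 0.3]))
```

Because the map from the two raw vectors to the rotation matrix is continuous, this representation avoids the discontinuities of quaternion or Euler-angle regression, which is why it is a popular choice for learning grasp orientations.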
Database:
Complementary Index
External link:
