Author:
Yajun Xu, Shogo Arai, Fuyuki Tokuda, Kazuhiro Kosuge
Language:
English
Year of publication:
2020
Subject:

Source:
IEEE Access, Vol 8, pp. 70262-70269 (2020)
Document type:
article
ISSN:
2169-3536
DOI:
10.1109/ACCESS.2020.2978506
Description:
3D instance segmentation is a fundamental task in computer vision. Effective segmentation plays an important role in robotic tasks, augmented reality, autonomous driving, etc. With the ascendancy of convolutional neural networks in 2D image processing, the use of deep learning methods to segment 3D point clouds has received much attention. Good convergence of the training loss often requires a large amount of human-annotated data, but creating such a 3D dataset is time-consuming. This paper proposes a method for training convolutional neural networks to predict instance segmentation results using synthetic data. The proposed method is based on the SGPN framework. We replaced the original feature extractor with "dynamic graph convolutional neural networks", which learn to extract local geometric features, and proposed a simple and effective loss function that makes the network focus more on hard examples. Experiments show that the proposed method significantly outperforms the state-of-the-art method on both the Stanford 3D Indoor Semantics Dataset and our own datasets.
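The abstract does not give the exact form of the proposed loss. A common way to make a network "focus on hard examples" is a focal-loss-style weighting that down-weights well-classified points. A minimal sketch under that assumption (the function name, the binary per-point targets, and the `gamma` parameter are illustrative, not taken from the paper):

```python
import numpy as np

def hard_example_focused_loss(probs, targets, gamma=2.0, eps=1e-7):
    """Focal-style binary loss: scales the cross-entropy of each point
    by (1 - p_t)^gamma, so confident (easy) predictions contribute
    little and hard, misclassified points dominate the gradient.

    probs   -- predicted probabilities of the positive class, shape (N,)
    targets -- ground-truth labels in {0, 1}, shape (N,)
    """
    probs = np.clip(probs, eps, 1.0 - eps)          # numerical safety
    p_t = np.where(targets == 1, probs, 1.0 - probs)  # prob. of true class
    # Standard cross-entropy is -log(p_t); the (1 - p_t)^gamma factor
    # suppresses easy examples (p_t near 1) and emphasizes hard ones.
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t)))
```

With `gamma = 0` this reduces to plain cross-entropy; larger `gamma` shifts the effective training signal further toward hard examples.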
Database:
Directory of Open Access Journals
External link:
