Author: |
Lihe Hu, Yi Zhang, Yang Wang, Huan Yang, Shuyi Tan |
Language: |
English |
Year of publication: |
2023 |
Subject: |
|
Source: |
Applied Sciences, Vol 13, Iss 6, p 3576 (2023) |
Document type: |
article |
ISSN: |
2076-3417 |
DOI: |
10.3390/app13063576 |
Description: |
Semantic mapping helps robots better understand their environment and is studied extensively in robotics. However, annotating every obstacle with semantics remains a challenge for semantic mapping. Unlike traditional segmentation methods, we propose integrating two network models to realize salient semantic segmentation for mobile robot mapping. First, we detect salient objects; the detection result is a grayscale image, which our trained model recognizes and annotates. We then project the salient objects’ contours, together with their semantics, onto the corresponding RGB image, realizing semantic segmentation of the salient objects. Treating only the salient objects, rather than all obstacles, as segmentation targets reduces the amount of background that must be considered. A neural network trained on the salient objects’ shape information is stable for object recognition and easy to train, and using only the shape feature reduces the computation spent on feature details. Experiments demonstrate that the algorithm trains the model quickly and provides semantic landmarks in the point cloud map as relative position references for robot repositioning when the map is reused in a similar environment. |
Database: |
Directory of Open Access Journals |
External link: |
|
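The abstract's contour-projection step (threshold the grayscale saliency map to a binary mask, extract the mask boundary, and mark it in the matching RGB image with a semantic label) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation; the function name, the threshold value, and the colour encoding of the semantic class are all assumptions.

```python
def project_salient_contour(saliency, rgb, label_color, thresh=128):
    """Overlay a salient object's contour, coloured by its semantic class, onto rgb.

    saliency    -- H x W list of grayscale values (0-255), a saliency map
    rgb         -- H x W list of [r, g, b] pixels (modified in place)
    label_color -- [r, g, b] colour encoding the semantic class (assumption)
    thresh      -- binarization threshold for the saliency map (assumption)
    """
    h, w = len(saliency), len(saliency[0])
    # Binarize the saliency map into a foreground mask.
    mask = [[saliency[y][x] >= thresh for x in range(w)] for y in range(h)]

    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            # A mask pixel lies on the contour if any 4-neighbour is background
            # (or falls outside the image).
            neighbours = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
            on_border = any(
                ny < 0 or ny >= h or nx < 0 or nx >= w or not mask[ny][nx]
                for ny, nx in neighbours
            )
            if on_border:
                rgb[y][x] = list(label_color)
    return rgb
```

In practice this boundary extraction would be done with a vision library's contour routines; the pure-Python 4-neighbour test above only illustrates the idea of projecting the salient object's outline, with its class colour, into the RGB frame.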