Author: |
Pan Fan, Chusan Zheng, Jin Sun, Dong Chen, Guodong Lang, Yafeng Li |
Language: |
English |
Year of publication: |
2024 |
Subject: |
|
Source: |
Agriculture, Vol 14, Iss 7, p 1059 (2024) |
Document type: |
article |
ISSN: |
2077-0472 |
DOI: |
10.3390/agriculture14071059 |
Description: |
The rapid development of artificial intelligence and remote sensing technologies is indispensable for modern agriculture. In orchard environments, challenges such as varying light conditions and shading complicate the tasks of intelligent picking robots. To enhance the recognition accuracy and efficiency of apple-picking robots, this study aimed to achieve high detection accuracy in complex orchard environments while reducing model computation and time consumption. This study used the CenterNet neural network as the detection framework, introduced gray-centered RGB color space vertical decomposition maps, and employed grouped convolutions and depthwise-separable convolutions to design a lightweight feature extraction network, Light-Weight Net, comprising eight bottleneck structures. Based on the recognition results, the 3D coordinates of the picking point were determined in the camera coordinate system by using the transformation between the image's physical coordinate system and the camera coordinate system, along with distance information from the depth map. Experimental results obtained using a testbed with an orchard-picking robot indicated that the proposed model achieved an average precision (AP) of 96.80% on the test set, with real-time performance of 18.91 frames per second (FPS) and a model size of only 17.56 MB. In addition, the root-mean-square error of positioning accuracy in the orchard test was 4.405 mm, satisfying the high-precision positioning requirements of the picking-robot vision system in complex orchard environments. |
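The transformation described in the abstract, from an image pixel plus depth-map distance to 3D camera coordinates, is conventionally the pinhole camera model. The sketch below illustrates that standard back-projection only; the function name and the intrinsic parameters (focal lengths fx, fy and principal point cx, cy) are illustrative placeholders, not values from the paper.

```python
import numpy as np

def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth Z into 3D camera coordinates
    via the standard pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    Depth and the returned coordinates share the same unit (e.g., mm)."""
    z = float(depth)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# A pixel at the principal point maps onto the optical axis: (0, 0, Z).
print(pixel_to_camera(320, 240, 1000.0, 600.0, 600.0, 320.0, 240.0))
```

In a picking-robot pipeline of this kind, (u, v) would be the detected picking point in the image and the depth would be read from the aligned depth map at that pixel; the camera intrinsics come from calibration.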
Database: |
Directory of Open Access Journals |
External link: |
|