Popis: |
Confined compartments are prone to safety hazards such as loose fasteners and object intrusion, and their restricted space makes manual inspection challenging. To address the challenges of complex inspection environments, diverse target categories, and variable target scales in confined compartments, this paper proposes GMS-YOLO, a novel network based on an improved YOLOv8 framework. To handle complex environments, the backbone employs GhostHGNetv2, whose more accurate high-level and low-level feature representations strengthen feature extraction and improve the distinction between background and targets, while also significantly reducing the number of parameters and the computational complexity of the network. To address varying target scales, the first layer of the feature fusion module introduces Multi-Scale Convolutional Attention (MSCA) to capture multi-scale contextual information and guide the feature fusion process. In addition, a new lightweight detection head, the Shared Convolutional Detection Head (SCDH), is designed so that the model achieves higher accuracy with a lighter structure. To evaluate the algorithm, an object-detection dataset for this scenario was constructed. The experimental results show that, compared with the original model, the number of parameters decreased by 37.8%, the GFLOPs decreased by 27.7%, and the average accuracy increased from 82.7% to 85.0%, validating the accuracy and applicability of the proposed GMS-YOLO network.
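
As an illustration of the multi-scale attention idea mentioned above, below is a minimal PyTorch sketch of a Multi-Scale Convolutional Attention block in the style popularized by SegNeXt. The kernel sizes, class name, channel count, and its exact placement inside GMS-YOLO's feature fusion module are assumptions made for illustration and may differ from the authors' implementation.

    # Hypothetical sketch of a Multi-Scale Convolutional Attention (MSCA) block.
    # Depth-wise convolutions with several strip-kernel sizes gather multi-scale
    # context; a 1x1 convolution turns the aggregated context into an attention
    # map that reweights the input features.
    import torch
    import torch.nn as nn

    class MSCA(nn.Module):
        def __init__(self, channels: int):
            super().__init__()
            # Local context via a 5x5 depth-wise convolution.
            self.conv0 = nn.Conv2d(channels, channels, 5, padding=2, groups=channels)
            # Multi-scale context via pairs of strip (1xk / kx1) depth-wise convolutions.
            self.branches = nn.ModuleList()
            for k in (7, 11, 21):
                self.branches.append(nn.Sequential(
                    nn.Conv2d(channels, channels, (1, k), padding=(0, k // 2), groups=channels),
                    nn.Conv2d(channels, channels, (k, 1), padding=(k // 2, 0), groups=channels),
                ))
            # Channel mixing to produce the attention weights.
            self.conv_out = nn.Conv2d(channels, channels, 1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            attn = self.conv0(x)
            attn = attn + sum(branch(attn) for branch in self.branches)
            attn = self.conv_out(attn)
            return attn * x  # attention-weighted features

    if __name__ == "__main__":
        feats = torch.randn(1, 64, 40, 40)  # dummy feature map (batch, channels, H, W)
        print(MSCA(64)(feats).shape)        # torch.Size([1, 64, 40, 40])

In this sketch the strip kernels (1x7/7x1, 1x11/11x1, 1x21/21x1) approximate large receptive fields at low cost, which is one common way such a module can capture multi-scale context before guiding feature fusion.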