Description: |
Due to the limited computational resources of portable devices, object detection models for drone detection are difficult to deploy in real time. To improve the detection efficiency for low, slow, and small unmanned aerial vehicles (UAVs), this study introduces an efficient drone detection model based on YOLOv5s (EDU-YOLO), incorporating lightweight feature extraction and balanced feature fusion modules. The model employs the ShuffleNetV2 network and a coordinate attention mechanism to construct a lightweight backbone, significantly reducing the number of model parameters. It also uses a bidirectional feature pyramid network and ghost convolutions to build a balanced neck network, enriching the model's representational capacity. In addition, a new loss function, EIoU, replaces CIoU to improve the model's localization accuracy and accelerate network convergence. Experimental results show that, compared with the YOLOv5s algorithm, our model incurs only a 1.1% decrease in mAP while reducing GFLOPs from 16.0 to 2.2 and increasing FPS from 153 to 188. This provides a solid foundation for networked optoelectronic detection of UAVs and similar slow-moving aerial targets, expanding the defensive perimeter and enabling earlier warnings.
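
For reference, below is a minimal PyTorch sketch of the EIoU regression loss mentioned in the abstract (IoU term plus center-distance, width, and height penalties normalized by the enclosing box). The function name `eiou_loss`, the (x1, y1, x2, y2) box format, and the mean reduction are illustrative assumptions, not the paper's actual implementation.

```python
import torch

def eiou_loss(pred, target, eps=1e-7):
    """EIoU loss for boxes given as (x1, y1, x2, y2) tensors of shape (N, 4)."""
    # Intersection area
    inter_x1 = torch.max(pred[:, 0], target[:, 0])
    inter_y1 = torch.max(pred[:, 1], target[:, 1])
    inter_x2 = torch.min(pred[:, 2], target[:, 2])
    inter_y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (inter_x2 - inter_x1).clamp(0) * (inter_y2 - inter_y1).clamp(0)

    # Union area and IoU
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)

    # Smallest enclosing box; its diagonal, width, and height normalize the penalties
    enc_w = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0]) + eps
    enc_h = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1]) + eps
    enc_diag_sq = enc_w ** 2 + enc_h ** 2

    # Center-distance penalty (as in DIoU/CIoU)
    cx_p = (pred[:, 0] + pred[:, 2]) / 2
    cy_p = (pred[:, 1] + pred[:, 3]) / 2
    cx_t = (target[:, 0] + target[:, 2]) / 2
    cy_t = (target[:, 1] + target[:, 3]) / 2
    dist_sq = (cx_p - cx_t) ** 2 + (cy_p - cy_t) ** 2

    # Separate width and height penalties, which EIoU uses in place of
    # CIoU's coupled aspect-ratio term
    w_p, h_p = pred[:, 2] - pred[:, 0], pred[:, 3] - pred[:, 1]
    w_t, h_t = target[:, 2] - target[:, 0], target[:, 3] - target[:, 1]

    loss = (1 - iou
            + dist_sq / enc_diag_sq
            + (w_p - w_t) ** 2 / enc_w ** 2
            + (h_p - h_t) ** 2 / enc_h ** 2)
    return loss.mean()
```

Compared with CIoU, the width and height terms are penalized independently, which is what the abstract credits for better localization accuracy and faster convergence.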