Multi‐dimensional weighted cross‐attention network in crowded scenes

Author: Yue Xi, Nailiang Kuang, Xuan Hou, Irfan Raza Naqvi, Jiangbin Zheng, Yefan Xie
Year of publication: 2021
Subject:
Source: IET Image Processing, Vol 15, Iss 14, Pp 3585-3598 (2021)
ISSN: 1751-9667
1751-9659
DOI: 10.1049/ipr2.12298
Description: Human detection in crowded scenes is a core component of crowd-safety analysis, underpinning applications such as emergency warning and security monitoring platforms. Although existing anchor-free methods offer fast inference, they are ill-suited to object detection in crowded scenes because they cannot predict well-refined bounding boxes. This work proposes an end-to-end anchor-free network, the Multi-dimensional Weighted Cross-Attention Network (MANet), which performs real-time human detection in crowded scenes. Specifically, a Double-flow Weighted Feature Cascade Module (DW-FCM) is used in the feature extractor to highlight the contribution of features at different levels. A Triplet Cross Attention Module (TCAM) is used in the detector head to strengthen the dependencies among multi-dimensional features, further improving the discrimination of human boundary features at a fine-grained level. Moreover, an Adaptively Opposite Thrust Mapping (AOTM) ground-truth annotation strategy is proposed to correct erroneous mappings and reduce wasted training iterations. Together, these strategies alleviate the inability of existing anchor-free networks to correctly distinguish and localize individual humans in crowded scenes. Compared with anchor-based detection methods, no anchor parameters need to be set manually, and the detection speed satisfies real-time requirements. Extensive comparative experiments on the CrowdHuman and WIDER FACE datasets demonstrate that the proposed strategy achieves state-of-the-art results among anchor-free methods.
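The abstract does not give implementation details for the Triplet Cross Attention Module, so the sketch below is only an illustrative PyTorch module in the spirit of triplet (three-branch) attention, where each branch gates one pairing of the channel, height, and width dimensions and the outputs are averaged. All names here (TripletCrossAttention, AttentionGate, ZPool) and the exact branch design are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn as nn


class ZPool(nn.Module):
    """Concatenate max- and mean-pooled responses along the first feature axis."""
    def forward(self, x):
        return torch.cat(
            (x.max(dim=1, keepdim=True)[0], x.mean(dim=1, keepdim=True)), dim=1
        )


class AttentionGate(nn.Module):
    """Produce a 2-D attention map from the pooled responses and gate the input."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.pool = ZPool()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.bn = nn.BatchNorm2d(1)

    def forward(self, x):
        attn = torch.sigmoid(self.bn(self.conv(self.pool(x))))
        return x * attn


class TripletCrossAttention(nn.Module):
    """Illustrative triplet-style attention (assumed structure, not the paper's):
    one branch per (H, C), (W, C), and (H, W) dimension pair, averaged at the end."""
    def __init__(self):
        super().__init__()
        self.cw = AttentionGate()  # channel–width interaction (H treated as "channels")
        self.ch = AttentionGate()  # channel–height interaction (W treated as "channels")
        self.hw = AttentionGate()  # ordinary spatial attention over (H, W)

    def forward(self, x):
        # Branch 1: rotate so H sits on the channel axis, attend, rotate back.
        x_cw = self.cw(x.permute(0, 2, 1, 3)).permute(0, 2, 1, 3)
        # Branch 2: rotate so W sits on the channel axis, attend, rotate back.
        x_ch = self.ch(x.permute(0, 3, 2, 1)).permute(0, 3, 2, 1)
        # Branch 3: plain spatial attention on the original layout.
        x_hw = self.hw(x)
        return (x_cw + x_ch + x_hw) / 3.0


if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)      # dummy detector-head feature map
    out = TripletCrossAttention()(feats)
    print(out.shape)                         # torch.Size([2, 64, 32, 32])
```

Such a module is drop-in (the output shape matches the input), which is consistent with the abstract's description of TCAM being inserted into the detector head, though the paper's actual design may differ.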
Database: OpenAIRE