Popis: |
The number of reported incidents caused by UAVs, intentional as well as accidental, is rising. To avoid such incidents in the future, it is essential to be able to detect UAVs. However, not every small flying object is a potential threat, and therefore the object not only has to be detected but also classified or identified. Typical 360° scanning LiDAR sensors, like those developed for automotive applications, can be deployed for the detection and tracking of small objects at ranges of up to 50 m. Unfortunately, the verification and classification of the detected objects is not possible in most cases due to the low resolution of this kind of sensor. In visual images, a differentiation of flying objects appears more practical. In this paper, we present a method for distinguishing between UAVs and birds in multi-sensor data (LiDAR point clouds and visual images). The flying objects are initially detected and tracked in the LiDAR data. After detection, a grayscale camera is automatically pointed at the object and an image is recorded. The differentiation between UAV and bird is then realized by a convolutional neural network (CNN). In addition, we investigate the potential of this approach for a more detailed classification of the type of UAV. The paper shows first results of this multi-sensor classification approach. The large amount of training data for the CNN, as well as the test data for the experiments, was recorded at a field trial of the NATO group SET-260 ("Assessment of EO/IR Technologies for Detection of Small UAVs in an Urban Environment").
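To make the final classification step of the described pipeline concrete, the following is a minimal, illustrative sketch only, not the authors' network: a small PyTorch CNN that takes a grayscale image crop of a tracked object and outputs UAV-vs-bird logits. The class name, layer sizes, and the assumed 64x64 input resolution are hypothetical choices for illustration.

```python
# Illustrative sketch only: a minimal grayscale-image CNN for a binary
# UAV-vs-bird decision. Layer sizes and the 64x64 input resolution are
# assumptions, not the configuration used in the paper.
import torch
import torch.nn as nn


class UavBirdCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Two small convolutional blocks on a single-channel (grayscale) input.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),  # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),  # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64),
            nn.ReLU(inplace=True),
            nn.Linear(64, num_classes),  # logits: [bird, UAV]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))


if __name__ == "__main__":
    model = UavBirdCNN()
    # One synthetic 64x64 grayscale crop of a tracked object (batch of 1).
    crop = torch.randn(1, 1, 64, 64)
    print(model(crop).shape)  # torch.Size([1, 2])
```

In the pipeline described above, such a network would be applied to the camera image recorded after the LiDAR track cues the camera; training on the SET-260 field-trial data and the finer-grained classification of UAV types are beyond the scope of this sketch.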