Popis: |
This thesis investigates the combination of a visual 3D object tracker and a 2D object detector for mutual improvement in accuracy and runtime. In light of an application in the field of optical navigation, the implementation pursues requirements such as high reliability, real-time capability and general applicability. The combined application builds on the framework of the Integrated Positioning System, a multi-sensor system primarily used for self-localization and environment reconstruction. The developed object tracker operates on point clouds, represents objects through Intrinsic Shape Signatures and the Color Signature of Histograms of Orientations, and locates objects using an advanced particle filter. As the object detector, the deep-learning-based method YOLOv3 is employed. The semantics and localization of the object detections are used to restrict the search space during object tracking. In return, the object tracking result is projected back onto the image to supplement the object detections and enhance their precision. The combined method is compared to each separate method using established metrics. The experiments are based on complementary datasets from the real world and from 3D simulations. Within both datasets, parameter setups, object classes and environmental scenarios are varied according to different attributes. The results show that the object detections enable faster and more accurate object tracking. However, due to its lower reliability, the proposed object tracker is not suitable for improving YOLOv3 detections.