Improving Visual Feature Extraction in Glacial Environments
Author: Shoya Higa, Aaron Parness, Steven Morad, Kobus Barnard, Russell C. Smith, Jeremy Nash
Language: English
Year of publication: 2019
Subject: FOS: Computer and information sciences; Computer Science - Computer Vision and Pattern Recognition (cs.CV); Computer Science - Robotics (cs.RO); ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION; Machine vision; Computer vision; Feature extraction; Feature (computer vision); Scale-invariant feature transform; Visual odometry; Ground truth; Orientation (computer vision); Terrain; Artificial intelligence; Control and Optimization; Control and Systems Engineering; Mechanical Engineering; Biomedical Engineering; Computer Science Applications; Human-Computer Interaction
Description: Glacial science could benefit tremendously from autonomous robots, but previous glacial robots have had perception issues in these colorless and featureless environments, specifically with visual feature extraction, which translates to failures in visual odometry and visual navigation. Glaciologists use near-infrared imagery to reveal the underlying heterogeneous spatial structure of snow and ice, and we theorize that this hidden near-infrared structure could yield more, and higher-quality, features than are available in visible light. We took a custom camera rig to Igloo Cave at Mt. St. Helens to test our theory. The rig contains two identical machine vision cameras, one of which was outfitted with multiple filters so that it sees only near-infrared light. We extracted features from short video clips taken inside the cave using three popular feature extractors (FAST, SIFT, and SURF), and quantified the number of features and their quality for visual navigation by comparing the resulting orientation estimates to ground truth. Our main contribution is the use of NIR longpass filters to improve the quantity and quality of visual features in icy terrain, irrespective of the feature extractor used. (An illustrative sketch of this feature-count comparison follows the record below.)
Database: OpenAIRE
External link:
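
The record above describes comparing the number and quality of FAST, SIFT, and SURF features between a visible-light camera and an NIR-longpass-filtered camera. Below is a minimal sketch of that feature-count comparison, not the authors' code: the image file names are hypothetical placeholders, SIFT requires OpenCV 4.4 or later, and SURF is only available in opencv-contrib builds with the nonfree modules enabled.

```python
# Minimal sketch: count FAST, SIFT, and SURF keypoints in a visible-light
# frame and an NIR-longpass frame of the same scene. File names are
# hypothetical placeholders, not taken from the paper.
import cv2


def count_features(gray):
    """Return the number of keypoints each available detector finds."""
    detectors = {
        "FAST": cv2.FastFeatureDetector_create(),
        "SIFT": cv2.SIFT_create(),  # built into OpenCV >= 4.4
    }
    # SURF lives in the contrib module and requires a nonfree-enabled build.
    if hasattr(cv2, "xfeatures2d") and hasattr(cv2.xfeatures2d, "SURF_create"):
        detectors["SURF"] = cv2.xfeatures2d.SURF_create()
    return {name: len(det.detect(gray, None)) for name, det in detectors.items()}


if __name__ == "__main__":
    # Hypothetical frames from the two-camera rig: visible vs. NIR-longpass.
    visible = cv2.imread("cave_visible.png", cv2.IMREAD_GRAYSCALE)
    nir = cv2.imread("cave_nir_longpass.png", cv2.IMREAD_GRAYSCALE)
    if visible is None or nir is None:
        raise FileNotFoundError("replace the placeholder image paths")

    print("visible-light features:", count_features(visible))
    print("NIR-longpass features: ", count_features(nir))
```

A higher feature count on the NIR-filtered frame would be consistent with the paper's claim, though the paper's quality metric additionally compares orientation estimates against ground truth.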