A Pipeline for Vision-Based On-Orbit Proximity Operations Using Deep Learning and Synthetic Imagery
Authors: Jacob Deutsch, Carson Schubert, Daniel Fonseka, Gavin Martin, Maruthi R. Akella, Nihal Dhamani, Kevin Black, Abhimanyu Dhir
Year of publication: 2021
Subject: FOS: Computer and information sciences; Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG); Software Engineering (cs.SE); deep learning; artificial intelligence; image processing; real-time computing; cloud computing; data curation; scalability; custom software; pipeline (software); 3D computer graphics
Source: 2021 IEEE Aerospace Conference (50100).
DOI: 10.1109/aero50100.2021.9438232
Description: Deep learning has become the gold standard for image processing over the past decade. Simultaneously, we have seen growing interest in orbital activities such as satellite servicing and debris removal that depend on proximity operations between spacecraft. However, two key challenges currently pose a major barrier to the use of deep learning for vision-based on-orbit proximity operations. Firstly, efficient implementation of these techniques relies on an effective system for model development that streamlines data curation, training, and evaluation. Secondly, a scarcity of labeled training data (images of a target spacecraft) hinders creation of robust deep learning models. This paper presents an open-source deep learning pipeline, developed specifically for on-orbit visual navigation applications, that addresses these challenges. The core of our work consists of two custom software tools built on top of a cloud architecture that interconnects all stages of the model development process. The first tool leverages Blender, an open-source 3D graphics toolset, to generate labeled synthetic training data with configurable model poses (positions and orientations), lighting conditions, backgrounds, and commonly observed in-space image aberrations. The second tool is a plugin-based framework for effective dataset curation and model training; it provides common functionality, such as metadata generation and remote storage access, to all projects while giving complete independence to project-specific code. Time-consuming, graphics-intensive processes such as synthetic image generation and model training run on cloud-based computational resources, which scale to any scope and budget and allow development of even the largest datasets and models from any machine. The presented system has been used in the Texas Spacecraft Laboratory with marked benefits in development speed and quality. Comment: Accepted to IEEE Aerospace Conference 2021; 14 pages, 11 figures.
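For illustration only, the sketch below shows the kind of Blender-driven synthetic image generation the first tool performs, using Blender's Python API (bpy): randomize the target spacecraft's pose and the sun angle, render a frame, and record the ground-truth pose label. This is a minimal hypothetical example, not the authors' released tool; the object names ("Target", "Sun"), pose ranges, output paths, and JSON label format are assumptions made for the sketch, and background and aberration randomization would be handled analogously.

```python
# Hypothetical sketch (not the authors' released tool): randomized synthetic
# image generation with Blender's Python API (bpy). Assumes a .blend scene
# containing objects named "Target" and "Sun" plus an active scene camera.
import json
import math
import random

import bpy

scene = bpy.context.scene
target = bpy.data.objects["Target"]  # spacecraft model (name is an assumption)
sun = bpy.data.objects["Sun"]        # light source driving illumination angle

labels = {}
for i in range(100):
    # Randomize target pose: position in front of the camera (assumed to sit
    # at the origin looking down -Z) and an arbitrary attitude.
    target.location = (
        random.uniform(-2.0, 2.0),
        random.uniform(-2.0, 2.0),
        random.uniform(-30.0, -10.0),
    )
    target.rotation_euler = (
        random.uniform(0.0, 2.0 * math.pi),
        random.uniform(0.0, 2.0 * math.pi),
        random.uniform(0.0, 2.0 * math.pi),
    )

    # Randomize lighting direction to mimic varying sun angles on orbit.
    sun.rotation_euler = (
        random.uniform(0.0, 2.0 * math.pi),
        random.uniform(0.0, 2.0 * math.pi),
        0.0,
    )

    # Render the frame and record the ground-truth pose label.
    scene.render.filepath = f"//renders/img_{i:05d}.png"
    bpy.ops.render.render(write_still=True)
    labels[f"img_{i:05d}"] = {
        "position": list(target.location),
        "rotation_euler": list(target.rotation_euler),
    }

# Write all pose labels next to the rendered images.
with open(bpy.path.abspath("//renders/labels.json"), "w") as f:
    json.dump(labels, f, indent=2)
```

A script like this can be run headlessly (e.g. `blender --background scene.blend --python generate.py`), which is what makes it practical to move the graphics-intensive rendering onto cloud compute as the abstract describes.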
Database: OpenAIRE
External link: