Increasing GPS Localization Accuracy With Reinforcement Learning
Authors: | Ethan Zhang, Neda Masoud |
---|---|
Year of publication: | 2021 |
Subject: |
logistics & transportation, advanced driver assistance systems, map matching, Kalman filter, extended Kalman filter, machine learning, reinforcement learning, Global Positioning System, hidden Markov model, artificial intelligence, automotive engineering, computer science applications |
Source: | IEEE Transactions on Intelligent Transportation Systems. 22:2615-2626 |
ISSN: | 1558-0016 1524-9050 |
Description: | Automated vehicles are envisioned to be an integral part of the next generation of transportation systems. Whether the goal is full autonomy or more advanced driver assistance systems, high-accuracy vehicle localization is essential for automated vehicles to navigate the transportation network safely. In this paper, we propose a reinforcement learning framework to increase GPS localization accuracy. The framework makes no rigid assumptions about the GPS device's hardware parameters or motion models, nor does it require infrastructure-based reference locations. The proposed reinforcement learning model learns an optimal strategy for making "corrections" to raw GPS observations. The model uses an efficient confidence-based reward mechanism that is independent of geolocation, which allows the model to generalize. We incorporate a map matching-based regularization term to reduce the variance of the reward return. The reinforcement learning model is constructed using the asynchronous advantage actor-critic (A3C) algorithm, which provides a parallel training protocol for the proposed model. The asynchronous reinforcement learning strategy enables short training sessions and provides more robust performance. The performance of the proposed model is assessed by comparing it with an extended Kalman filter algorithm as a benchmark. Our experiments indicate that the proposed reinforcement learning model converges quickly, has lower prediction variance, and can localize vehicles with 50% less error than the benchmark extended Kalman filter model. |
Database: | OpenAIRE |
External link: |
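The core idea described in the abstract — an agent that learns to apply a "correction" offset to raw GPS fixes, rewarded by how confidently the corrected fix agrees with a geolocation-independent reference such as a map-matched road centerline — can be sketched in a few lines. This is only an illustrative sketch of that mechanism, not the authors' implementation: `apply_correction`, `confidence_reward`, and the Gaussian reward shape are hypothetical stand-ins for the paper's actor policy and confidence-based reward.

```python
import numpy as np

def apply_correction(raw_gps, action):
    """Add a learned 2-D correction (lat/lon offset) to a raw GPS fix.
    In the paper's framework the correction is chosen by the A3C actor
    policy; here `action` is just a vector for illustration."""
    return raw_gps + action

def confidence_reward(corrected, reference, sigma):
    """Hypothetical confidence-based reward: highest when the corrected
    fix lies near a high-confidence reference point (e.g. a map-matched
    position). Only relative distance matters, so the reward does not
    depend on the absolute geolocation."""
    dist = np.linalg.norm(corrected - reference)
    return float(np.exp(-(dist ** 2) / (2 * sigma ** 2)))

# A noisy raw fix sits slightly off the map-matched reference point;
# a suitable correction moves it closer and earns a higher reward.
raw = np.array([42.2770, -83.7380])        # raw GPS observation
reference = np.array([42.2771, -83.7382])  # map-matched reference
sigma = 1e-4                               # reward length scale (degrees)

reward_uncorrected = confidence_reward(apply_correction(raw, np.zeros(2)),
                                       reference, sigma)
reward_corrected = confidence_reward(apply_correction(raw, np.array([1e-4, -2e-4])),
                                     reference, sigma)
# reward_corrected > reward_uncorrected, so the policy gradient pushes
# the actor toward corrections that shrink the localization error.
```

In the full framework this reward would be computed per time step and combined with the map matching-based regularization term to reduce the variance of the return during A3C training.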