Increasing GPS Localization Accuracy With Reinforcement Learning

Automated vehicles are envisioned to be an integral part of the next generation of transportation systems. Whether striving for full autonomy or incorporating more advanced driver assistance systems, high-accuracy vehicle localization is essential for automated vehicles to navigate the transportation network safely. In this paper, the authors propose a reinforcement learning framework to increase GPS localization accuracy. The framework makes no rigid assumptions about the GPS device's hardware parameters or motion models, nor does it require infrastructure-based reference locations. The proposed reinforcement learning model learns an optimal strategy for making "corrections" to raw GPS observations. The model uses an efficient confidence-based reward mechanism that is independent of geolocation, enabling the model to generalize. The authors incorporate a map matching-based regularization term to reduce the variance of the reward return. The reinforcement learning model is constructed using the asynchronous advantage actor-critic (A3C) algorithm, which provides a parallel training protocol for the proposed model. The asynchronous reinforcement learning strategy facilitates short training sessions and provides more robust performance. The performance of the proposed model is assessed by comparing it with an extended Kalman filter (EKF) algorithm as a benchmark. The authors' experiments indicate that the proposed reinforcement learning model converges quickly, exhibits lower prediction variance, and can localize vehicles with 50% less error than the benchmark EKF model.
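The EKF benchmark mentioned in the abstract can be sketched as follows. The abstract does not specify the motion or measurement models, so this is only an illustrative sketch: it assumes a linear constant-velocity motion model with direct 2D position measurements (under which the EKF reduces to a plain Kalman filter), and all noise parameters below are placeholder assumptions, not values from the paper.

```python
import numpy as np

def kalman_gps_filter(gps_fixes, dt=1.0, meas_std=3.0, accel_std=0.5):
    """Filter noisy 2D GPS fixes with a constant-velocity Kalman filter.

    gps_fixes: array-like of (x, y) position measurements in meters.
    Noise parameters are illustrative assumptions only.
    """
    # State: [x, y, vx, vy]; constant-velocity transition.
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    # GPS observes position only.
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)
    q = accel_std ** 2
    Q = q * np.diag([dt**3 / 3, dt**3 / 3, dt, dt])  # simplified process noise
    R = (meas_std ** 2) * np.eye(2)                  # measurement noise

    gps_fixes = np.asarray(gps_fixes, dtype=float)
    x = np.array([gps_fixes[0, 0], gps_fixes[0, 1], 0.0, 0.0])
    P = np.eye(4) * 10.0
    estimates = [x[:2].copy()]
    for z in gps_fixes[1:]:
        # Predict step.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update step with the new GPS position measurement.
        y = z - H @ x                      # innovation
        S = H @ P @ H.T + R                # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P
        estimates.append(x[:2].copy())
    return np.array(estimates)
```

On simulated constant-velocity motion with Gaussian GPS noise, the filtered track has a lower mean position error than the raw fixes; the paper's claim is that the proposed RL model halves the error of this kind of baseline.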

Language

  • English

Media Info

Subject/Index Terms

Filing Info

  • Accession Number: 01773594
  • Record Type: Publication
  • Files: TLIB, TRIS
  • Created Date: May 31 2021 8:19PM