Sequential Data Fusion of GNSS Pseudoranges and Dopplers With Map-Based Vision Systems

Tightly coupling Global Navigation Satellite System (GNSS) pseudorange and Doppler measurements with other sensors is known to increase the accuracy and consistency of positioning information. Nowadays, high-accuracy geo-referenced lane-marking maps are seen as key information sources for autonomous vehicle navigation. When an exteroceptive sensor such as a video camera or a lidar is used to detect them, lane markings provide positioning information that can be merged with GNSS data. In this paper, measurements from a forward-looking video camera are merged with raw GNSS pseudoranges and Doppler measurements on visible satellites. Dead-reckoning sensors are also integrated to create a localization system that provides pose estimates with high availability. The data fusion problem is formulated as sequential filtering. A reduced-order state-space model of the observation problem is proposed, yielding a real-time system that is easy to implement. A Kalman filter with measured input and correlated noises is developed using a suitable error model of the GNSS pseudoranges. The authors' experimental results show that this tightly coupled approach outperforms, in terms of accuracy and consistency, a loosely coupled method that uses GNSS fixes as inputs.
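As a rough illustration of the kind of tightly coupled update described above, the sketch below shows a minimal extended Kalman filter step that corrects a planar position/velocity/clock state directly with raw pseudoranges and Doppler-derived pseudorange rates. The state layout, noise levels, and function names are illustrative assumptions only; they are not the authors' implementation, which additionally fuses map-matched camera measurements and dead reckoning and models correlated pseudorange errors.

```python
import numpy as np

def predict(x, P, dt, q_acc=0.5, q_clk=1.0):
    """Constant-velocity motion and constant clock-drift prediction (assumed model).
    State x = [px, py, vx, vy, clock bias, clock drift]."""
    F = np.eye(6)
    F[0, 2] = F[1, 3] = F[4, 5] = dt             # position <- velocity, clock bias <- drift
    Q = np.diag([0.25 * dt**4 * q_acc, 0.25 * dt**4 * q_acc,
                 dt**2 * q_acc, dt**2 * q_acc,
                 dt**2 * q_clk, dt * q_clk])     # crude process noise, for illustration only
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def gnss_update(x, P, sat_pos, sat_vel, pseudoranges, range_rates,
                sigma_pr=3.0, sigma_prr=0.3):
    """Tightly coupled EKF correction using raw pseudoranges and pseudorange
    rates from each visible satellite (illustrative noise standard deviations)."""
    pos, vel = x[0:2], x[2:4]
    bias, drift = x[4], x[5]
    H_rows, residuals, sigmas = [], [], []
    for i in range(len(pseudoranges)):
        delta = pos - sat_pos[i]
        rho = np.linalg.norm(delta)
        los = delta / rho                        # line-of-sight unit vector
        # Pseudorange model: geometric range plus receiver clock bias.
        residuals.append(pseudoranges[i] - (rho + bias))
        H_rows.append([los[0], los[1], 0.0, 0.0, 1.0, 0.0])
        sigmas.append(sigma_pr)
        # Doppler model: relative velocity projected on the LOS plus clock drift.
        predicted_rate = los @ (vel - sat_vel[i]) + drift
        residuals.append(range_rates[i] - predicted_rate)
        H_rows.append([0.0, 0.0, los[0], los[1], 0.0, 1.0])
        sigmas.append(sigma_prr)
    H, z, R = np.array(H_rows), np.array(residuals), np.diag(np.square(sigmas))
    S = H @ P @ H.T + R                          # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K @ z
    P = (np.eye(6) - K @ H) @ P
    return x, P
```

The point of the sketch is the level of integration: the filter consumes individual pseudoranges and Doppler measurements rather than an already-computed GNSS fix, which is what distinguishes the tightly coupled scheme from the loosely coupled baseline mentioned in the abstract.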

Language

  • English

Filing Info

  • Accession Number: 01633995
  • Record Type: Publication
  • Files: TRIS
  • Created Date: May 1 2017 9:37AM