Improving Multisensor Positioning of Land Vehicles with Integrated Visual Odometry for Next-Generation Self-Driving Cars

To achieve full autonomy, autonomous vehicles (AVs) fundamentally rely on the Global Navigation Satellite System (GNSS) for positioning and navigation information. However, in areas such as urban cores, parking lots, and under dense foliage, all commonly frequented by AVs, GNSS signals suffer from blockage, interference, and multipath. These effects cause large errors and long service discontinuities that mar the performance of current systems. The prevalence of vision and low-cost inertial sensors offers an attractive opportunity to improve positioning and navigation accuracy in such GNSS-challenged environments. This paper presents enhancements to existing multisensor integration systems that use the inertial navigation system (INS) to aid outlier feature rejection in Visual Odometry (VO). A scheme called Aided Visual Odometry (AVO) is developed and integrated with a high-performance mechanization architecture utilizing vehicle motion and orientation sensors. The resulting solution exhibits improved state covariance convergence and navigation accuracy while reducing computational complexity. The proposed solution is verified experimentally on three real road trajectories, on two different land vehicles, and with two low-cost inertial measurement units (IMUs).
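The INS-aided outlier rejection the abstract describes can be sketched as a simple gating test: the INS predicts the inter-frame camera motion, that motion is mapped to a predicted pixel displacement for each tracked feature, and matches whose observed displacement disagrees with the prediction are discarded before pose estimation. The sketch below is illustrative only and is not the paper's algorithm; the pinhole parameters (`f`, `cx`), the simplified flow model, and the `tol` threshold are all assumptions introduced for the example.

```python
import math

def predict_flow(pt, yaw_rate, speed, dt, f=700.0, cx=320.0):
    # Hypothetical simplified flow model (not the paper's): yaw rotation
    # shifts pixels horizontally by about f * yaw, and forward translation
    # adds a small radial component away from the principal point.
    du_rot = f * yaw_rate * dt
    du_trans = 0.01 * speed * dt * (pt[0] - cx)
    return (pt[0] + du_rot + du_trans, pt[1])

def reject_outliers(matches, yaw_rate, speed, dt, tol=5.0):
    """Keep feature matches whose observed displacement agrees with the
    INS-predicted flow to within tol pixels (a coarse outlier gate)."""
    inliers = []
    for prev, curr in matches:
        pu, pv = predict_flow(prev, yaw_rate, speed, dt)
        if math.hypot(curr[0] - pu, curr[1] - pv) <= tol:
            inliers.append((prev, curr))
    return inliers
```

Because the gate uses cheap INS predictions rather than iterative robust estimation, it can prune gross outliers before a RANSAC-style solver runs, which is one plausible source of the reduced computational complexity the abstract reports.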

Language

  • English

Filing Info

  • Accession Number: 01675918
  • Record Type: Publication
  • Files: TRIS
  • Created Date: Jul 24 2018 10:07AM