Multiscale Site Matching for Vision-Only Self-Localization of Intelligent Vehicles

Self-localization is a challenging problem in intelligent vehicle (IV) systems. Traditional self-localization methods, such as the Global Navigation Satellite System (GNSS), Inertial Navigation System (INS), and visual simultaneous localization and mapping (vSLAM), suffer from low accuracy, high cost, or low robustness. To this end, this paper proposes a new multi-scale site matching localization (MS2ML) method for IV systems that uses a single monocular camera. MS2ML consists of a coarse localization, an image-level localization, and a metric localization. In coarse localization, MS2ML applies Bayesian vision-motion topological localization to obtain a set of candidate nodes from a visual map. A holistic feature is then generated for each query image, and holistic feature matching is performed to realize image-level localization, selecting one node from the candidates. In metric localization, the closest node and the vehicle pose are computed by matching local features against three-dimensional (3D) data. To evaluate the proposed MS2ML, real-world driving tests were carried out on three different routes: two from an urban roadway and an industrial park in Wuhan, China, and a third from the public KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) data set. The total length of these routes exceeds 7 km. The experimental results demonstrate that the average localization errors of the proposed MS2ML method are less than 0.45 frames and that the pose errors are less than 0.59 m. The proposed method thus maintains high accuracy and strong robustness in various environments.
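The coarse-to-fine cascade described in the abstract (topological prior, holistic-feature node selection, then metric refinement) can be sketched as a minimal mock-up. All node names, descriptor values, the cosine-similarity choice, and the placeholder metric step below are illustrative assumptions, not the paper's actual implementation:

```python
import math

# Hypothetical visual map: each node stores a holistic descriptor and a
# pre-mapped 2D pose (illustrative values, not from the paper).
MAP_NODES = {
    "n0": {"holistic": [1.0, 0.0, 0.0], "pose": (0.0, 0.0)},
    "n1": {"holistic": [0.0, 1.0, 0.0], "pose": (5.0, 0.0)},
    "n2": {"holistic": [0.0, 0.7, 0.7], "pose": (10.0, 0.0)},
}

def cosine(a, b):
    # Similarity between two holistic descriptors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def coarse_localize(prior_nodes):
    # Stage 1 (stand-in): the Bayesian vision-motion topological step
    # narrows the map to a candidate node set; here the prior is passed
    # through unchanged.
    return prior_nodes

def image_level_localize(query_holistic, candidates):
    # Stage 2: holistic-feature matching picks the best candidate node.
    return max(candidates,
               key=lambda n: cosine(query_holistic, MAP_NODES[n]["holistic"]))

def metric_localize(node):
    # Stage 3 (stand-in): local-feature / 3D matching would refine the
    # vehicle pose; here we simply return the stored node pose.
    return MAP_NODES[node]["pose"]

def ms2ml(query_holistic, prior_nodes):
    # Run the three stages in cascade: coarse -> image-level -> metric.
    candidates = coarse_localize(prior_nodes)
    node = image_level_localize(query_holistic, candidates)
    return node, metric_localize(node)
```

For example, a query descriptor close to node `n1` resolves to that node and its stored pose: `ms2ml([0.1, 0.9, 0.1], ["n0", "n1", "n2"])` returns `("n1", (5.0, 0.0))`.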


Media Info

  • Language: English

Filing Info

  • Accession Number: 01678571
  • Record Type: Publication
  • Files: TRIS
  • Created Date: Aug 2 2018 2:56PM