Vision-Only Localization

Autonomous and intelligent vehicles will undoubtedly depend on an accurate ego-localization solution. Global navigation satellite systems (GNSS) suffer from multipath propagation, which renders them insufficient on their own. Herein, the authors present a real-time system for six-degrees-of-freedom ego localization that uses only a single monocular camera. The camera image is used to estimate an ego pose relative to a previously computed visual map. The authors describe a process to automatically extract the ingredients of this map from stereoscopic image sequences: a mapping trajectory relative to the first pose, global scene signatures, and local landmark descriptors. The localization algorithm first performs a topological localization step, which obviates the need for any global positioning sensors such as GNSS, and subsequently applies a metric refinement step that recovers an accurate metric pose. Metric localization recovers the ego pose through a factor-graph optimization over the local landmarks. Centimeter-level accuracy is demonstrated by a set of experiments in an urban environment: two localization estimates are computed for two independent cameras mounted on the same vehicle, and the resulting independent trajectories are compared for consistency. Finally, the authors present qualitative experiments of an augmented reality (AR) system that depends on the aforementioned localization solution. Several screenshots of the AR system are shown, confirming centimeter-level accuracy and sub-degree angular precision.
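The two-stage pipeline described above can be sketched in miniature. The snippet below is an illustrative approximation only, not the authors' implementation: the topological step is rendered as a nearest-neighbor search over global scene signatures (cosine similarity is an assumption), and the metric step is stood in for by a closed-form 2-D rigid alignment of observed landmarks to map landmarks (a Kabsch/Procrustes solve), whereas the paper performs a full 6-DoF factor-graph optimization.

```python
import numpy as np


def topological_localize(query_sig, map_sigs):
    """Return the index of the map node whose global scene signature best
    matches the query signature (cosine similarity; assumed metric)."""
    q = query_sig / np.linalg.norm(query_sig)
    M = map_sigs / np.linalg.norm(map_sigs, axis=1, keepdims=True)
    return int(np.argmax(M @ q))


def metric_refine_2d(map_pts, obs_pts):
    """Recover a 2-D rigid pose (R, t) with R @ obs + t ~= map, via the
    closed-form Kabsch solution -- a simplified stand-in for the paper's
    landmark-based factor-graph optimization."""
    pc = map_pts.mean(axis=0)          # map centroid
    qc = obs_pts.mean(axis=0)          # observation centroid
    H = (obs_pts - qc).T @ (map_pts - pc)   # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = pc - R @ qc
    return R, t
```

In the full system, the topological match would narrow the search to a map segment, and the metric step would then optimize the 6-DoF pose against the local landmark descriptors of that segment.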

Language

  • English

Filing Info

  • Accession Number: 01539287
  • Record Type: Publication
  • Files: TLIB, TRIS
  • Created Date: Sep 9 2014 3:27PM