High Precision and Robust Vehicle Localization Algorithm With Visual-LiDAR-IMU Fusion

Simultaneous localization and mapping (SLAM) has become indispensable for autonomous vehicles. Since visual images are vulnerable to lighting interference and light detection and ranging (LiDAR) depends heavily on the geometric features of the surrounding scene, relying on a camera or LiDAR alone shows limitations in challenging environments. This article proposes a visual-LiDAR-IMU fusion method for high-precision and robust vehicle localization. In the front end, the LiDAR point cloud is used to obtain depth information for the visual features, and the synchronized IMU measurements are fed into the pose estimation module in a loosely coupled manner. In the back end, two strategies are proposed to reduce the computational load of the algorithm: a balanced selection strategy based on keyframe and sliding-window algorithms, and a classification optimization strategy based on feature points and pose estimation assistance. In addition, an improved loop detection algorithm based on the Iterative Closest Point (ICP) method is proposed to reduce large-scale drift. Experimental results on real-world scenes show that the average positioning errors of the authors' algorithm are 1.10 m, 0.91 m, and 1.04 m in the x-, y-, and z-directions; the average rotation errors are 1.03 deg, 0.81 deg, and 0.70 deg for roll, pitch, and yaw; the average resource utilization is 32.04% (CPU) and 13.18% (memory); and the average processing time is 24.87 ms. Compared with the ORB-SLAM3, LVIO, LVI-SAM, R³LIVE, and FAST-LIVO algorithms, the proposed algorithm achieves better accuracy and robustness with the best real-time performance.
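The front-end step of using the LiDAR point cloud to supply depth for visual features is commonly implemented by projecting the point cloud into the camera image and matching projected points to feature pixels. The sketch below illustrates this idea only; it is not the authors' implementation, and the function name, the nearest-neighbor association, and the pixel-distance threshold are assumptions for illustration.

```python
import numpy as np

def assign_depth_to_features(features_px, lidar_pts, T_cam_lidar, K, max_px_dist=3.0):
    """Assign a depth to each 2-D visual feature from a LiDAR scan (illustrative sketch).

    features_px : (N, 2) pixel coordinates of tracked visual features
    lidar_pts   : (M, 3) LiDAR points in the LiDAR frame
    T_cam_lidar : (4, 4) extrinsic transform from the LiDAR frame to the camera frame
    K           : (3, 3) camera intrinsic matrix
    Returns an (N,) array of depths; NaN where no LiDAR point projects
    within max_px_dist pixels of the feature.
    """
    # Transform LiDAR points into the camera frame (homogeneous coordinates).
    pts_h = np.hstack([lidar_pts, np.ones((lidar_pts.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    # Keep only points in front of the camera.
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]
    # Project into pixel coordinates.
    proj = (K @ pts_cam.T).T
    proj = proj[:, :2] / proj[:, 2:3]
    depths = np.full(len(features_px), np.nan)
    for i, f in enumerate(features_px):
        # Nearest projected LiDAR point in the image plane (assumed association rule).
        d2 = np.sum((proj - f) ** 2, axis=1)
        j = np.argmin(d2)
        if d2[j] <= max_px_dist ** 2:
            depths[i] = pts_cam[j, 2]
    return depths
```

A feature that gains a depth this way can be initialized as a 3-D landmark immediately, rather than waiting for triangulation across frames, which is one reason such fusion improves robustness in low-texture or poorly lit scenes.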

Language

  • English

Filing Info

  • Accession Number: 01930100
  • Record Type: Publication
  • Files: TRIS
  • Created Date: Sep 13 2024 10:33AM