MA-VIED: A Multisensor Automotive Visual Inertial Event Dataset

Visual Inertial Odometry (VIO) and Simultaneous Localization and Mapping (SLAM) have attracted growing interest in both the consumer and racing automotive sectors in recent decades. With the introduction of novel neuromorphic vision sensors, it is now possible to accurately localize a vehicle even under complex environmental conditions, leading to an improved and safer driving experience. In this paper, the authors propose MA-VIED, a large-scale driving dataset covering race-track-like loops, maneuvers, and standard driving scenarios, all bundled into a rich sensory dataset. MA-VIED provides highly accurate IMU data, standard and event camera streams, and RTK position data from a dual-antenna GPS, all hardware-synchronized with the cameras and the IMU. In addition, the authors collect accurate wheel odometry and other signals from the vehicle's CAN bus. The dataset contains 13 sequences recorded in urban, suburban, and racetrack-like environments under varying lighting conditions and driving dynamics. The authors provide ground-truth RTK data for algorithm evaluation, as well as calibration sequences for both the IMU and the cameras. They then present three tests demonstrating the suitability of MA-VIED for monocular VIO applications, using state-of-the-art VIO algorithms and an EKF-based sensor fusion solution. The experimental results show that MA-VIED can support the development and prototyping of novel automotive-oriented frame- and event-based monocular VIO algorithms.
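The abstract mentions an EKF-based sensor fusion solution that combines inertial motion with absolute position fixes. The paper does not give its implementation; the following is a minimal, illustrative sketch of such a filter, assuming a simple constant-velocity planar state corrected by RTK-like position measurements. All matrices and noise values here are hypothetical placeholders, not the authors' actual parameters.

```python
import numpy as np

# Illustrative EKF-style fusion step (NOT the paper's implementation):
# state x = [px, py, vx, vy] is predicted with IMU-derived acceleration
# and corrected with a position fix (e.g. from an RTK receiver).

def ekf_predict(x, P, accel, dt, q=0.1):
    """Propagate the state with a constant-acceleration motion model."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    B = np.array([[0.5 * dt**2, 0],
                  [0, 0.5 * dt**2],
                  [dt, 0],
                  [0, dt]], dtype=float)
    x = F @ x + B @ accel
    Q = q * np.eye(4)              # process noise (illustrative value)
    P = F @ P @ F.T + Q
    return x, P

def ekf_update(x, P, z, r=0.01):
    """Correct the state with a 2D position measurement."""
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)
    R = r * np.eye(2)              # measurement noise (illustrative value)
    y = z - H @ x                  # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S) # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Toy run: the vehicle accelerates along x; one predict and one update.
x = np.zeros(4)                    # [px, py, vx, vy]
P = np.eye(4)
x, P = ekf_predict(x, P, accel=np.array([1.0, 0.0]), dt=0.1)
x, P = ekf_update(x, P, z=np.array([0.01, 0.0]))
print(x)
```

In a real VIO/GNSS pipeline the prediction would integrate full 3D IMU measurements (with bias states and attitude) and the update would fuse visual and RTK constraints; the two-function predict/update structure, however, is the same.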

Language

  • English

Filing Info

  • Accession Number: 01918204
  • Record Type: Publication
  • Files: TRIS
  • Created Date: May 10 2024 4:51PM