VTGNet: A Vision-Based Trajectory Generation Network for Autonomous Vehicles in Urban Environments

Traditional autonomous-driving systems are built from many separate modules for perception, planning, and control, which makes them difficult to generalize to varied scenarios because of complex assumptions and interdependencies among the modules. Recently, end-to-end driving methods have emerged that perform well and generalize to new environments by learning directly from expert-provided data. However, many existing methods neither estimate the confidence of the predicted driving actions nor address the ability to recover from driving mistakes. In this paper, the authors develop an uncertainty-aware, end-to-end trajectory generation method based on imitation learning. It extracts spatiotemporal features from front-view camera images for scene understanding and then generates collision-free trajectories several seconds into the future. Experimental results show that, under various weather and lighting conditions, the network reliably generates trajectories in different urban situations, such as turning at intersections and slowing down for collision avoidance. Furthermore, closed-loop driving tests show that the proposed method achieves better cross-scene/platform driving results than a state-of-the-art (SOTA) end-to-end control method: the model can recover from off-center and off-orientation errors and captures 80% of dangerous cases with high uncertainty estimations.
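
For readers who want a concrete picture of how such a model might be structured, below is a minimal, hypothetical PyTorch sketch of an uncertainty-aware trajectory network: a per-frame CNN encoder, a recurrent layer over the image history, and a head that predicts future waypoints together with a variance term as a simple confidence signal. The ResNet-18 backbone, the GRU, the Gaussian negative log-likelihood loss, and all layer sizes are illustrative assumptions and are not specified in the abstract; this is not the authors' exact architecture.

# Hypothetical sketch only; architecture details are assumptions, not taken
# from the paper's abstract.
import torch
import torch.nn as nn
import torchvision.models as models


class TrajectoryNet(nn.Module):
    def __init__(self, history_len=4, horizon=10, hidden_dim=256):
        super().__init__()
        # Per-frame visual encoder (assumed ResNet-18 backbone).
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()          # 512-d feature per frame
        self.encoder = backbone
        # Temporal aggregation over the image history (assumed GRU).
        self.temporal = nn.GRU(512, hidden_dim, batch_first=True)
        # Heads: future (x, y) waypoints plus a per-coordinate log-variance
        # as a simple aleatoric-uncertainty estimate (an assumption, not the
        # paper's exact uncertainty mechanism).
        self.mean_head = nn.Linear(hidden_dim, horizon * 2)
        self.logvar_head = nn.Linear(hidden_dim, horizon * 2)
        self.horizon = horizon

    def forward(self, frames):
        # frames: (batch, history_len, 3, H, W) front-view camera images
        b, t, c, h, w = frames.shape
        feats = self.encoder(frames.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, last = self.temporal(feats)       # last hidden: (1, b, hidden_dim)
        last = last.squeeze(0)
        mean = self.mean_head(last).reshape(b, self.horizon, 2)
        logvar = self.logvar_head(last).reshape(b, self.horizon, 2)
        return mean, logvar


def gaussian_nll(mean, logvar, target):
    """Heteroscedastic regression loss: errors are scaled by the predicted
    variance, so a large variance signals low confidence in the waypoint."""
    return (0.5 * (torch.exp(-logvar) * (target - mean) ** 2 + logvar)).mean()


if __name__ == "__main__":
    net = TrajectoryNet()
    imgs = torch.randn(2, 4, 3, 224, 224)    # dummy image history
    gt = torch.randn(2, 10, 2)                # dummy future waypoints
    mean, logvar = net(imgs)
    loss = gaussian_nll(mean, logvar, gt)
    print(mean.shape, loss.item())

In a setup like this, the predicted variance can be thresholded to flag low-confidence (potentially dangerous) situations, which is one common way to realize the kind of uncertainty screening the abstract describes.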

Language

  • English

Filing Info

  • Accession Number: 01785357
  • Record Type: Publication
  • Files: TRIS
  • Created Date: Oct 22 2021 5:16PM