End-to-End Autonomous Driving With Semantic Depth Cloud Mapping and Multi-Agent

Focusing on the task of point-to-point navigation for an autonomous driving vehicle, the authors propose a novel deep learning model trained in an end-to-end and multi-task learning manner to perform both perception and control tasks simultaneously. The model is used to drive the ego vehicle safely by following a sequence of routes defined by the global planner. The perception part of the model encodes high-dimensional observation data provided by an RGBD camera while performing semantic segmentation, semantic depth cloud (SDC) mapping, and traffic light state and stop sign prediction. The control part then decodes the encoded features, along with additional information provided by GPS and a speedometer, to predict waypoints together with a latent feature space. Furthermore, two agents are employed to process these outputs and form a control policy that determines the levels of steering, throttle, and brake as the final action. The model is evaluated in the CARLA simulator with various scenarios composed of normal and adversarial situations and different weather conditions to mimic real-world conditions. In addition, the authors conduct a comparative study with several recent models to justify the performance in multiple aspects of driving, as well as an ablation study on SDC mapping and the multi-agent setup to understand their roles and behavior. As a result, the authors' model achieves the highest driving score even with fewer parameters and a lower computational load. To support future studies, the authors share their code at https://github.com/oskarnatan/end-to-end-driving.
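The two-agent control stage described in the abstract can be illustrated with a minimal sketch: one agent proposes an action derived from the predicted waypoints, another proposes an action derived from the latent feature space, and a simple policy combines the two proposals into the final steering, throttle, and brake levels. All names and the blending weight below are illustrative assumptions, not the authors' actual implementation (see their repository for the real code).

```python
# Hypothetical sketch of combining two agents' proposed control actions.
# The weighted-average policy and the weight w are assumptions for
# illustration only.
from dataclasses import dataclass


@dataclass
class Action:
    steering: float  # in [-1, 1]
    throttle: float  # in [0, 1]
    brake: float     # in [0, 1]


def blend_actions(a_wp: Action, a_latent: Action, w: float = 0.5) -> Action:
    """Combine the waypoint-based and latent-feature-based actions
    with a fixed weight w given to the waypoint agent."""
    mix = lambda x, y: w * x + (1.0 - w) * y
    return Action(
        steering=mix(a_wp.steering, a_latent.steering),
        throttle=mix(a_wp.throttle, a_latent.throttle),
        brake=mix(a_wp.brake, a_latent.brake),
    )


# Example: the waypoint agent suggests a gentle left turn under throttle,
# while the latent agent suggests braking; the policy averages them.
final = blend_actions(Action(-0.2, 0.5, 0.0), Action(0.0, 0.0, 0.6), w=0.5)
# final -> Action(steering=-0.1, throttle=0.25, brake=0.3)
```

In practice the blending weight could itself be predicted or scheduled; a fixed average is used here only to keep the sketch self-contained.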

Language

  • English

Filing Info

  • Accession Number: 01875448
  • Record Type: Publication
  • Files: TRIS
  • Created Date: Mar 13 2023 10:23AM