Tactical driving decisions of unmanned ground vehicles in complex highway environments: A deep reinforcement learning approach

In this study, a deep reinforcement learning approach is proposed for tactical driving of unmanned ground vehicles in complex highway traffic environments. Tactical driving is challenging for unmanned ground vehicles because it must reconcile routing decisions with real-time traffic dynamics. The core of the authors' approach is a deep Q-network that takes dynamic traffic information as input and outputs typical tactical driving decisions as actions. The reward function accounts for successful highway exit, average traveling speed, and driving safety and comfort. To endow an unmanned ground vehicle with the situational traffic information critical for tactical driving, sensor readings such as vehicle position and velocity are augmented with assessments of the ego vehicle's collision risk, potential field, and kinematics, and the result is used as input to the deep Q-network model. A convolutional neural network is built and fine-tuned to extract traffic features that facilitate the Q-learning decision-making process. For model training and testing, a highway simulation platform is constructed with realistic parameter settings derived from a real-world highway traffic dataset. The performance of the deep Q-network model is validated through extensive simulation experiments under varying parameter settings such as traffic density and risk level. The results demonstrate the potential of the authors' deep Q-network model to learn challenging tactical driving decisions under multiple objectives in a complex traffic environment.
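The Q-learning core of the approach can be illustrated with a minimal sketch. The discretized lane/speed state, the action set, the toy transition model, and the reward weights below are all assumptions chosen for illustration; the authors' actual model is a convolutional deep Q-network over augmented sensor features, not a tabular method.

```python
import numpy as np

# Illustrative tabular Q-learning sketch of the tactical-driving loop described
# in the abstract. State space, actions, transitions, and reward weights are
# assumed for illustration; they are not the authors' CNN-based DQN.

ACTIONS = ["keep_lane", "change_left", "change_right", "accelerate", "decelerate"]
N_LANES, N_SPEED_BINS = 4, 5           # toy discretized traffic state
GAMMA, ALPHA, EPSILON = 0.95, 0.1, 0.1  # discount, learning rate, exploration

rng = np.random.default_rng(0)
Q = np.zeros((N_LANES, N_SPEED_BINS, len(ACTIONS)))

def reward(speed_bin, action, reached_exit):
    """Multi-objective reward: exit success, speed, comfort (assumed weights)."""
    r = 0.2 * speed_bin                  # favor higher average speed
    if action in ("change_left", "change_right"):
        r -= 0.1                         # comfort penalty for lane changes
    if reached_exit:
        r += 10.0                        # bonus for a successful highway exit
    return r

def step(lane, speed_bin, action):
    """Toy transition: lane changes shift lanes, accel/decel shift speed bin."""
    if action == "change_left":
        lane = max(lane - 1, 0)
    elif action == "change_right":
        lane = min(lane + 1, N_LANES - 1)
    elif action == "accelerate":
        speed_bin = min(speed_bin + 1, N_SPEED_BINS - 1)
    elif action == "decelerate":
        speed_bin = max(speed_bin - 1, 0)
    # Exit is only reachable from the rightmost lane in this toy model.
    reached_exit = lane == N_LANES - 1 and rng.random() < 0.1
    return lane, speed_bin, reached_exit

for episode in range(200):
    lane, speed_bin = int(rng.integers(N_LANES)), int(rng.integers(N_SPEED_BINS))
    for t in range(50):
        if rng.random() < EPSILON:                       # epsilon-greedy
            a = int(rng.integers(len(ACTIONS)))
        else:
            a = int(np.argmax(Q[lane, speed_bin]))
        nl, ns, done = step(lane, speed_bin, ACTIONS[a])
        r = reward(ns, ACTIONS[a], done)
        target = r + (0.0 if done else GAMMA * np.max(Q[nl, ns]))
        Q[lane, speed_bin, a] += ALPHA * (target - Q[lane, speed_bin, a])
        lane, speed_bin = nl, ns
        if done:
            break
```

In the paper the tabular `Q` is replaced by a deep Q-network whose input is the augmented traffic state (positions, velocities, collision risk, potential field, kinematics) and whose output is one Q-value per tactical action; the update rule above is otherwise the standard Q-learning target.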

Language

  • English

Filing Info

  • Accession Number: 01765277
  • Record Type: Publication
  • Files: TRIS
  • Created Date: Feb 5 2021 3:14PM