Highway Exiting Planner for Automated Vehicles Using Reinforcement Learning

Exiting from highways in crowded, dynamic traffic is an important path planning task for autonomous vehicles (AVs). This task is challenging because of the uncertain motion of surrounding vehicles and the AV's limited sensing/observation window. Conventional path planning methods usually compute a mandatory lane change (MLC) command, but the lane change behavior (e.g., vehicle speed and gap acceptance) should also adapt to traffic conditions and the urgency of exiting. In this paper, the authors propose a reinforcement learning-enhanced highway-exit planner. The learning-based strategy learns from past failures and adjusts the vehicle motion when the AV fails to exit. The reinforcement learning component is based on Monte Carlo tree search (MCTS). The proposed learning-enhanced highway-exit planner is tested in 6,000 stochastic simulation runs. The results indicate that the proposed planner achieves a higher probability of successful highway exiting than a benchmark MLC planner.
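The abstract names MCTS as the learning mechanism but does not describe the planner's internals. The following is a minimal, self-contained sketch of how a generic UCT-style MCTS could weigh lane-change decisions in a highly simplified exit scenario; the state abstraction (lane offset and cells to the exit), the actions, and the 0.7 gap-acceptance probability are all illustrative assumptions, not the paper's actual model.

```python
import math
import random

# Toy abstraction (illustrative only, NOT the paper's model): the AV is
# `lane` lanes away from the exit lane with `dist` road cells left before
# the exit. Each step advances one cell; a lane change toward the exit
# succeeds with an assumed gap-acceptance probability P_GAP.
ACTIONS = ("keep", "change_right")
P_GAP = 0.7  # hypothetical probability that the target gap is acceptable

def transition(state, action, rng):
    lane, dist = state
    if action == "change_right" and lane > 0 and rng.random() < P_GAP:
        lane -= 1
    return (lane, dist - 1)

def terminal(state):
    return state[1] <= 0

def reward(state):
    lane, _ = state
    return 1.0 if lane == 0 else 0.0  # success iff AV is in the exit lane

class Node:
    """Per-state visit and value statistics for UCT."""
    def __init__(self):
        self.n = 0
        self.na = {a: 0 for a in ACTIONS}
        self.q = {a: 0.0 for a in ACTIONS}

def uct_action(node, c=1.0):
    # Pick the action maximizing mean return plus an exploration bonus.
    def score(a):
        if node.na[a] == 0:
            return float("inf")
        return node.q[a] / node.na[a] + c * math.sqrt(math.log(node.n) / node.na[a])
    return max(ACTIONS, key=score)

def rollout(state, rng):
    # Default policy: uniformly random actions until the exit is reached.
    while not terminal(state):
        state = transition(state, rng.choice(ACTIONS), rng)
    return reward(state)

def mcts(root_state, iters=3000, seed=0):
    rng = random.Random(seed)
    tree = {}
    for _ in range(iters):
        state, path = root_state, []
        # Selection/expansion: descend with UCT, stop at a new edge.
        while not terminal(state):
            node = tree.setdefault(state, Node())
            a = uct_action(node)
            path.append((node, a))
            first_visit = node.na[a] == 0
            state = transition(state, a, rng)
            if first_visit:
                break
        value = rollout(state, rng)  # simulation from the frontier state
        for node, a in path:         # backpropagation
            node.n += 1
            node.na[a] += 1
            node.q[a] += value
    root = tree[root_state]
    return max(ACTIONS, key=lambda a: root.na[a])  # most-visited action

if __name__ == "__main__":
    # Two lanes from the exit lane with four cells to go: changing right
    # immediately leaves more retry attempts than staying, so the search
    # should favor it.
    print(mcts((2, 4)))
```

In this toy setup the value of attempting the lane change early comes from the extra retry opportunities it preserves, which mirrors the abstract's point that gap acceptance and urgency should shape the maneuver; the real planner would of course operate on a far richer traffic state and vehicle dynamics.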

Language

  • English

Filing Info

  • Accession Number: 01768814
  • Record Type: Publication
  • Files: TLIB, TRIS
  • Created Date: Feb 19 2021 1:57PM