Rule-Constrained Reinforcement Learning Control for Autonomous Vehicle Left Turn at Unsignalized Intersection

Controlling an autonomous vehicle's unprotected left turn at an intersection is a challenging task. Traditional rule-based decision and control algorithms for autonomous driving struggle to construct accurate and trustworthy mathematical models for such scenarios because of their considerable uncertainty and unpredictability. To overcome this problem, a rule-constrained reinforcement learning (RCRL) control method for autonomous driving is proposed in this work. To train a reinforcement learning controller under rule constraints, the output of the path planning module is used as a goal condition in the reinforcement learning framework. Because these constraints incorporate vehicle dynamics, the proposed approach is safer and more reliable than end-to-end learning, ensuring that the generated trajectories are locally optimal while adapting to unpredictable situations. In the experiments, a highly randomized two-way four-lane intersection built in the CARLA simulator is used to verify the effectiveness of the proposed RCRL control method. The results show that the proposed method provides real-time safe planning and high passing efficiency for autonomous vehicles in the unprotected left-turn task.
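The abstract describes conditioning the RL controller on the rule-based planner's output. The sketch below is a minimal illustration of one way such a goal-conditioned observation and path-tracking reward could be assembled; it is not the authors' implementation, and all function names, state layouts, and reward weights are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch: the planner is assumed to expose (x, y) waypoints,
# and the simulator is assumed to expose ego pose and speed.

def build_observation(ego_state, planned_waypoints, n_goals=5):
    """Concatenate ego speed with the next few planned waypoints expressed
    in the ego frame, so the RL policy is conditioned on the rule-based
    planner's output (the 'goal condition' mentioned in the abstract)."""
    x, y, yaw, speed = ego_state
    rel = []
    for wx, wy in planned_waypoints[:n_goals]:
        dx, dy = wx - x, wy - y
        # Rotate the offset into the ego frame.
        rel.append(np.cos(-yaw) * dx - np.sin(-yaw) * dy)
        rel.append(np.sin(-yaw) * dx + np.cos(-yaw) * dy)
    return np.array([speed] + rel, dtype=np.float32)

def step_reward(ego_state, planned_waypoints, collided, progress):
    """Reward progress through the turn, penalize deviation from the
    planned path (the rule constraint), and heavily penalize collisions.
    The weights here are arbitrary placeholders."""
    x, y, _, _ = ego_state
    wx, wy = planned_waypoints[0]
    lateral_error = np.hypot(wx - x, wy - y)
    reward = progress - 0.5 * lateral_error
    if collided:
        reward -= 100.0
    return reward
```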

  • Supplemental Notes:
    • Abstract reprinted with permission of the Institution of Engineering and Technology.
  • Authors:
    • Cai, Yingfeng
    • Zhou, Rong
    • Wang, Hai
    • Sun, Xiaoqiang
    • Chen, Long
    • Li, Yicheng
    • Liu, Qingchao
    • He, Youguo
  • Publication Date: 2023-11

Language

  • English

Filing Info

  • Accession Number: 01916380
  • Record Type: Publication
  • Files: TRIS
  • Created Date: Apr 23 2024 10:47AM