Safe-State Enhancement Method for Autonomous Driving via Direct Hierarchical Reinforcement Learning

Reinforcement learning (RL) has shown excellent performance in sequential decision-making problems, and safety in the form of state constraints is of great significance in the design and application of RL. Simple constrained end-to-end RL methods may fail badly in complex systems such as autonomous vehicles. In contrast, some hierarchical RL (HRL) methods generate driving goals directly, which can be closely combined with motion planning. To meet safety requirements, some safety-enhanced RL methods add post-processing modules to avoid unsafe goals or pursue expectation-based safety, which accepts the existence of unsafe states and allows some violations of the safety constraints. However, ensuring state-wise safety is vital for autonomous vehicles. Therefore, this paper proposes a state-based safety enhancement method for autonomous driving via direct hierarchical reinforcement learning. The authors design a constrained reinforcement learner based on the State-based Constrained Markov Decision Process (SCMDP), in which a learnable safety module adaptively adjusts the constraint strength. They integrate a dynamics module into policy training and generate future goals that account for safety, temporal-spatial continuity, and dynamic feasibility, eliminating dependence on a prior model. Simulations in typical highway scenarios with uncertainties show that the proposed method achieves better training performance, higher driving safety in interactive scenes, more intelligent decisions in traffic congestion, and more economical driving on roads with changing slopes.
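
The record gives no implementation details, but the adaptively adjusted constraint strength described in the abstract can be read as a Lagrangian-style weight on a state-constraint cost. The sketch below shows one way such an update could look in Python; the gap-based violation cost, the safety margin, and all function and parameter names are illustrative assumptions, not the authors' design.

```python
# Minimal sketch (not the authors' implementation) of an adaptively weighted
# state constraint in the spirit of an SCMDP: a Lagrange-style multiplier is
# raised when sampled states violate a safety margin and relaxed otherwise,
# so the constraint strength tracks the observed violation level.
# All names and thresholds below are illustrative assumptions.

import numpy as np

def state_violation(state, safety_margin=2.0):
    """Per-state constraint cost: positive when the ego-to-lead gap
    (assumed to be state[0]) falls below the safety margin."""
    return max(0.0, safety_margin - state[0])

def update_multiplier(lmbda, states, lr_lambda=0.05, target_violation=0.0):
    """Dual-ascent style update: grow lambda in proportion to the average
    state-constraint violation in the batch, never letting it go negative."""
    avg_violation = np.mean([state_violation(s) for s in states])
    return max(0.0, lmbda + lr_lambda * (avg_violation - target_violation))

def constrained_objective(rewards, states, lmbda):
    """Penalized return used by the policy update: task reward minus the
    adaptively weighted state-constraint cost."""
    penalty = lmbda * np.mean([state_violation(s) for s in states])
    return np.mean(rewards) - penalty

# Toy usage: gaps sampled around the margin; lambda rises while violations persist.
rng = np.random.default_rng(0)
lmbda = 0.0
for _ in range(100):
    batch = [np.array([rng.normal(2.0, 0.5)]) for _ in range(32)]
    lmbda = update_multiplier(lmbda, batch)
print("final constraint weight:", round(lmbda, 3))
```

A dual-ascent update of this kind strengthens the penalty while violations persist and relaxes it once the sampled states satisfy the margin, which is one plausible reading of the abstract's "learnable safety module" that adjusts constraint strength adaptively.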

Language

  • English

Filing Info

  • Accession Number: 01900667
  • Record Type: Publication
  • Files: TRIS
  • Created Date: Nov 28 2023 10:27AM