Deep Deterministic Policy Gradient for High-Speed Train Trajectory Optimization

This paper proposes a novel train trajectory optimization (TTO) approach for high-speed railways. The authors restrict their attention to single-train operation scenarios with different scheduled or rescheduled running times, aiming to generate optimal recommended train trajectories in real time that ensure punctual and energy-efficient train operation. A learning-based approach, deep deterministic policy gradient (DDPG), is designed to generate optimal train trajectories through offline training on the interaction between the agent and a trajectory simulation environment. An allocating running time and selecting operation modes (ARTSOM) algorithm is proposed to improve train punctuality and to provide a series of discrete operation modes (full traction, cruising, coasting, full braking), thereby producing a feasible training set for DDPG and speeding up the training process. Numerical experiments show that DDPG can generate an optimized speed profile within seconds on a realistic railway line. In addition, the results demonstrate the generalization ability of the trained DDPG in solving TTO problems with different running times and line conditions.
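The abstract names four discrete operation modes (full traction, cruising, coasting, full braking) acting on a simulated train. A minimal sketch of such a point-mass simulation environment is given below; all physical parameters, the Davis-type resistance coefficients, and the function names are illustrative assumptions, not values or code from the paper.

```python
# Hypothetical point-mass train simulation using the four operation modes
# named in the abstract. Parameters below are assumed for illustration only.

A_TRACTION = 0.5    # assumed max traction acceleration, m/s^2
A_BRAKE = -0.8      # assumed max service braking acceleration, m/s^2
DT = 1.0            # simulation time step, s

def resistance_accel(v):
    """Davis-type running resistance expressed as a deceleration (m/s^2).
    Coefficients are assumed, not taken from the paper."""
    a0, a1, a2 = 0.01, 1e-4, 3e-6
    return -(a0 + a1 * v + a2 * v * v)

def step(position, speed, mode):
    """Advance the train one time step under the given operation mode."""
    if mode == "traction":
        a = A_TRACTION
    elif mode == "cruising":
        a = -resistance_accel(speed)  # cancel resistance to hold speed
    elif mode == "coasting":
        a = 0.0                       # no traction, no braking
    elif mode == "braking":
        a = A_BRAKE
    else:
        raise ValueError(f"unknown mode: {mode}")
    a += resistance_accel(speed)      # resistance always acts
    speed = max(0.0, speed + a * DT)
    position += speed * DT
    return position, speed

# Example: a short traction-cruise-coast-brake sequence.
pos, v = 0.0, 0.0
profile = []
for mode, steps in [("traction", 60), ("cruising", 120),
                    ("coasting", 60), ("braking", 30)]:
    for _ in range(steps):
        pos, v = step(pos, v, mode)
    profile.append((mode, round(pos), round(v, 1)))
```

In a DDPG setting, an agent would interact with an environment of this kind, observing state (e.g., position, speed, remaining running time) and receiving a reward balancing punctuality and energy use; the mode sequence above mimics the discrete operation modes ARTSOM supplies as a feasible training set.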

Language

  • English

Filing Info

  • Accession Number: 01860211
  • Record Type: Publication
  • Files: TRIS
  • Created Date: Sep 30 2022 5:00PM