Multi-agent deep reinforcement learning for adaptive coordinated metro service operations with flexible train composition

This paper presents an adaptive control system for coordinated metro operations with flexible train composition using a multi-agent deep reinforcement learning (MADRL) approach. The control problem is formulated as a Markov decision process (MDP) with multiple agents regulating different service lines in a metro network with passenger transfers. To ensure the overall computational effectiveness and stability of the control system, the authors adopt an actor–critic reinforcement learning framework in which each control agent is associated with a critic function for estimating future system states and an actor function that derives local operational decisions. The critics and actors in the MADRL are represented by multi-layer artificial neural networks (ANNs). A multi-agent deep deterministic policy gradient (MADDPG) algorithm is developed to train the actor and critic ANNs through successive simulated transitions over the entire metro network. The framework is tested on a real-world scenario covering the Bakerloo and Victoria lines of the London Underground, UK. Experimental results demonstrate that the proposed method outperforms previous centralized optimization and distributed control approaches in terms of solution quality and service performance. Further analysis shows the merits of MADRL for coordinated service regulation with flexible train composition. This study contributes to real-time coordinated metro network service regulation with flexible train composition through advanced optimization techniques.
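To make the actor–critic architecture concrete, below is a minimal MADDPG sketch in Python/PyTorch. It is not the authors' implementation: the observation and action dimensions, network sizes, and the single update step are illustrative assumptions. The key MADDPG ingredient it shows is that each agent's critic scores the joint observations and actions of all agents, while each actor acts only on its own line's local observation.

```python
# Minimal MADDPG actor-critic sketch (illustrative; the dimensions,
# network sizes, and update step below are assumptions, not the paper's code).
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Maps one agent's local observation (e.g., headways and loads on
    its own line) to a continuous control action."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh(),  # actions scaled to [-1, 1]
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

class CentralizedCritic(nn.Module):
    """Scores the JOINT observations and actions of all agents, which is
    what lets one line's controller account for transfers from others."""
    def __init__(self, joint_obs_dim: int, joint_act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(joint_obs_dim + joint_act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, joint_obs: torch.Tensor, joint_act: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([joint_obs, joint_act], dim=-1))

# Two agents, e.g., one per service line (hypothetical dimensions).
n_agents, obs_dim, act_dim = 2, 8, 2
actors = [Actor(obs_dim, act_dim) for _ in range(n_agents)]
critics = [CentralizedCritic(n_agents * obs_dim, n_agents * act_dim)
           for _ in range(n_agents)]

# One deterministic policy-gradient step for agent 0 on a dummy batch of
# simulated transitions; other agents' actions are held fixed (detached).
obs = torch.randn(32, n_agents, obs_dim)
acts = [a(obs[:, i]) if i == 0 else a(obs[:, i]).detach()
        for i, a in enumerate(actors)]
q0 = critics[0](obs.flatten(1), torch.cat(acts, dim=-1))
(-q0.mean()).backward()  # ascend the critic's value estimate w.r.t. actor 0
```

In the full algorithm, target networks, replay of simulated network transitions, and critic regression toward bootstrapped returns would complete the MADDPG update; the sketch above only isolates the decentralized-actor/centralized-critic structure described in the abstract.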

Language

  • English

Filing Info

  • Accession Number: 01848029
  • Record Type: Publication
  • Files: TRIS
  • Created Date: Jun 6 2022 4:54PM