Multi-Agent Reinforcement Learning for Integrated Network of Adaptive Traffic Signal Controllers (MARLIN-ATSC)

Traffic congestion in the Greater Toronto Area costs Canada $6 billion/year and is expected to grow to $15 billion/year over the next few decades. Adaptive Traffic Signal Control (ATSC) is a promising technique to alleviate traffic congestion. For medium to large transportation networks, coordinated ATSC becomes a challenging problem because the number of system states and actions grows exponentially with the number of networked intersections. Efficient and robust controllers can be designed using a multi-agent reinforcement learning (MARL) approach in which each controller (agent) is responsible for the traffic lights at a single junction. This paper presents a novel, decentralized, and coordinated adaptive real-time traffic signal control system, Multi-Agent Reinforcement Learning for an Integrated Network of Adaptive Traffic Signal Controllers (MARLIN-ATSC), that aims to minimize the total vehicle delay in the traffic network. The system is tested using microscopic traffic simulation software (PARAMICS) on a network of 5 signalized intersections in Downtown Toronto. The performance of MARLIN-ATSC is compared against two approaches: conventional pretimed signal control (B1) and independent RL-based control agents (B2), i.e., with no coordination. The results show that network-wide average delay savings range from 32% to 63% relative to B1 and from 7% to 12% relative to B2 under different demand levels and arrival profiles.
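To make the per-intersection agent idea in the abstract concrete, the sketch below shows a minimal independent tabular Q-learning controller for a single junction, in the spirit of the B2 baseline (no coordination between neighboring agents, so the coordination mechanism that distinguishes MARLIN-ATSC is not shown). All class names, the queue-based state discretization, the delay-proxy reward, and the parameter values are illustrative assumptions, not the paper's specification.

```python
# Illustrative sketch only: one RL agent per signalized intersection, as the
# abstract describes. Interfaces, names, and parameters are assumptions; a
# real deployment would read states/rewards from a simulator such as PARAMICS.
import random
from collections import defaultdict


class IntersectionAgent:
    """Independent tabular Q-learning controller for one junction (B2-style)."""

    def __init__(self, n_phases, alpha=0.1, gamma=0.95, epsilon=0.05):
        self.n_phases = n_phases          # candidate signal phases (actions)
        self.alpha = alpha                # learning rate
        self.gamma = gamma                # discount factor
        self.epsilon = epsilon            # exploration rate
        self.q = defaultdict(lambda: [0.0] * n_phases)

    def act(self, state):
        """Epsilon-greedy choice of the next signal phase."""
        if random.random() < self.epsilon:
            return random.randrange(self.n_phases)
        values = self.q[state]
        return max(range(self.n_phases), key=values.__getitem__)

    def learn(self, state, action, reward, next_state):
        """One-step Q-learning update; reward is e.g. negative added delay."""
        target = reward + self.gamma * max(self.q[next_state])
        self.q[state][action] += self.alpha * (target - self.q[state][action])


def discretize(queues, bins=(2, 5, 10)):
    """Map raw queue lengths per approach to a small discrete state tuple."""
    return tuple(sum(q > b for b in bins) for q in queues)


if __name__ == "__main__":
    # Toy loop standing in for the simulator interface; transitions are faked.
    agent = IntersectionAgent(n_phases=2)
    queues = [3, 1, 4, 2]                 # hypothetical queues on 4 approaches
    state = discretize(queues)
    for _ in range(100):
        action = agent.act(state)
        next_queues = [max(0, q + random.randint(-2, 1)) for q in queues]
        reward = -sum(next_queues)        # proxy for negative vehicle delay
        next_state = discretize(next_queues)
        agent.learn(state, action, reward, next_state)
        queues, state = next_queues, next_state
```

A coordinated variant in the spirit of MARLIN-ATSC would additionally let each agent condition its learning on the policies or actions of its neighboring intersections; that exchange is omitted here for brevity.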

Language

  • English

Media Info

  • Media Type: Web
  • Features: References
  • Pagination: pp 319-326
  • Monograph Title: 15th International IEEE Conference on Intelligent Transportation Systems (ITSC 2012)

Subject/Index Terms

Filing Info

  • Accession Number: 01568018
  • Record Type: Publication
  • ISBN: 9781467330640
  • Files: TRIS
  • Created Date: Jun 26 2015 5:12PM