Reinforcement Learning–Based Attack on Adaptive Traffic Control Systems

Adaptive Traffic Control Systems (ATCS) are promising solutions for urban traffic control in modern cities. Seeking to mitigate congestion, ATCS continuously adapt traffic signal timing to real-time traffic demand. Nevertheless, owing to their present lack of security, these systems remain vulnerable to cyber-attacks. Because they adapt to traffic demand dynamically, their response is not known beforehand. The authors argue that an attack on an adaptive system may therefore remain unnoticed more easily than an attack on a traditional system with fixed operation, although going unnoticed requires greater sophistication. With this aim in mind, they developed a test bed that enables the simulation of intelligent insider attacks on a generic ATCS. It couples Deep Reinforcement Learning (DRL) techniques for controlling traffic in a road network with a microscopic traffic simulation tool. A DRL-based agent using the Proximal Policy Optimization (PPO) learning algorithm was trained to produce congestion on a road network consisting of 25 intersections. The agent injected illegitimate computer-based commands that disrupted the system's normal response and thereby induced congestion on the road network. In the experiments, the agent increased the global delay of vehicles by 8.2% and raised the number of near-zero-speed vehicles by 10.6% of the total vehicles in the network. It also reduced the average global speed by 2.27 m/s, or 23.9% of its initial value.
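The abstract's core idea, an RL agent that learns which injected signal commands maximize congestion, can be illustrated with a toy sketch. This is not the authors' test bed: the paper uses PPO against a microscopic simulator of 25 intersections, whereas the sketch below trains a simple policy-gradient (REINFORCE-style) attacker over one hypothetical intersection whose delay model, action set, and reward are invented for illustration only.

```python
import random, math

# Hypothetical single-intersection delay model: a legitimate controller
# would pick a balanced green split (~0.5); extreme splits starve one
# approach and raise total delay. (Invented curve, not from the paper.)
def traffic_delay(green_split):
    return (green_split - 0.5) ** 2 * 100.0

# Green splits the attacker can inject as illegitimate commands.
ACTIONS = [0.1, 0.3, 0.5, 0.7, 0.9]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def train(episodes=2000, lr=0.1, seed=0):
    """Policy-gradient stand-in for PPO: the attacker's reward is the
    delay it induces, so gradient ascent pushes probability mass toward
    the most disruptive command."""
    rng = random.Random(seed)
    logits = [0.0] * len(ACTIONS)
    for _ in range(episodes):
        probs = softmax(logits)
        a = rng.choices(range(len(ACTIONS)), weights=probs)[0]
        reward = traffic_delay(ACTIONS[a])
        # Expected reward under the current policy as a baseline.
        baseline = sum(p * traffic_delay(g) for p, g in zip(probs, ACTIONS))
        advantage = reward - baseline
        for i in range(len(ACTIONS)):
            grad = (1.0 if i == a else 0.0) - probs[i]
            logits[i] += lr * advantage * grad
    return logits

logits = train()
best = ACTIONS[max(range(len(ACTIONS)), key=lambda i: logits[i])]
# The learned policy should prefer an extreme split (0.1 or 0.9),
# i.e. the injected command that maximizes congestion.
```

In the paper's setting, the same loop would run with PPO's clipped surrogate objective, a state drawn from the simulated network, and a reward built from network-wide delay and speed measurements.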

Language

  • English

Media Info

  • Media Type: Digital/other
  • Features: Figures; References; Tables;
  • Pagination: 22p

Filing Info

  • Accession Number: 01857841
  • Record Type: Publication
  • Report/Paper Numbers: TRBAM-22-03444
  • Files: TRIS, TRB, ATRI
  • Created Date: Sep 15 2022 4:53PM