Deep Reinforcement Learning Approach for Improving Freeway Lane Reduction Bottleneck Throughput via Variable Speed Limit Control Through Connected Vehicles

Connected vehicles (CVs) will enable a range of applications for improving the efficiency of traffic flow. This paper focuses on improving the efficiency of a freeway bottleneck through a variable speed limit (VSL) imposed on CVs. A freeway lane reduction, where three lanes merge into two, is modeled in a microscopic simulation environment. To determine optimal VSLs under time-varying demand, an Asynchronous Advantage Actor-Critic (A3C) reinforcement learning (RL) algorithm is proposed. The RL algorithm controls a VSL upstream of the bottleneck, manipulating the speed of CVs, and thereby the inflow of vehicles to the bottleneck, to minimize delays and increase throughput. CVs are assumed to receive VSL messages through Infrastructure-to-Vehicle (I2V) communications technologies. Various market penetration rates (MPRs) of CVs are considered in the simulation. Numerical experiments demonstrate that the RL algorithm adapts to the stochastic arrivals of CVs and achieves significant improvements in throughput and delay even at low MPRs.
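
The record does not include the paper's implementation details. As a purely illustrative sketch of the technique named in the abstract, the code below shows a single-worker advantage actor-critic update in PyTorch over a hypothetical discrete set of VSL actions; A3C runs this same update asynchronously across several parallel workers. The state vector, action set, returns, and network sizes are assumptions for illustration, not taken from the paper.

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical

# Hypothetical discrete VSL action set (mph); the paper's actual set is
# not given in this record.
VSL_ACTIONS = [30, 40, 50, 60]


class ActorCritic(nn.Module):
    """Shared-body actor-critic: a policy over VSL actions plus a state value."""

    def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.policy = nn.Linear(hidden, n_actions)  # action logits
        self.value = nn.Linear(hidden, 1)           # state value V(s)

    def forward(self, state: torch.Tensor):
        h = self.body(state)
        return self.policy(h), self.value(h)


def a2c_update(net, optimizer, states, actions, returns):
    """One synchronous advantage actor-critic step; A3C applies this
    asynchronously from multiple workers onto shared parameters."""
    logits, values = net(states)
    dist = Categorical(logits=logits)
    advantage = returns - values.squeeze(-1)        # A(s,a) = R - V(s)
    policy_loss = -(dist.log_prob(actions) * advantage.detach()).mean()
    value_loss = advantage.pow(2).mean()
    entropy = dist.entropy().mean()                 # exploration bonus
    loss = policy_loss + 0.5 * value_loss - 0.01 * entropy
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


# Example with a dummy batch. Assumed state: [bottleneck density,
# upstream mean speed, CV share]; none of this is specified in the record.
net = ActorCritic(state_dim=3, n_actions=len(VSL_ACTIONS))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
states = torch.rand(8, 3)
actions = torch.randint(0, len(VSL_ACTIONS), (8,))
returns = torch.rand(8)                             # discounted returns
a2c_update(net, opt, states, actions, returns)
```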

Language

  • English

Media Info

  • Media Type: Digital/other
  • Features: Figures; References; Tables
  • Pagination: 19p

Filing Info

  • Accession Number: 01764254
  • Record Type: Publication
  • Report/Paper Numbers: TRBAM-21-03293
  • Files: TRIS, TRB, ATRI
  • Created Date: Dec 23 2020 11:23AM