A Deep Reinforcement Learning-Based Distributed Connected Automated Vehicle Control under Communication Failure
To stabilize traffic flow when mobile communications fail, the authors propose a deep reinforcement learning (DRL)-based distributed longitudinal control for connected and automated vehicles (CAVs). Vehicle-to-vehicle communication failure is included in the DRL training to emulate varying information flow topologies (IFTs). A dynamic data-fusion step smooths the jumpy control signals caused by the changing IFTs. Each CAV controlled by the DRL-based agent receives real-time information from the CAVs ahead of it and takes longitudinal actions to maintain equilibrium in mixed traffic. The authors use simulations to tune the communication adjustments and to validate the performance and traffic-management capability of the proposed algorithm.
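The abstract's fusion-and-smoothing idea can be illustrated with a minimal sketch. This is not the paper's algorithm: the distance-decayed weighting, the exponential-smoothing step, and all function names and parameters below are illustrative assumptions. The point is only that fusing whatever predecessor information is still available, then low-pass filtering the fused signal, degrades gracefully as the IFT changes under communication failure.

```python
import math

def fuse_predecessor_states(states, decay=0.5):
    """Fuse speed reports from the CAVs ahead into one target speed.

    `states` maps predecessor rank (1 = immediate leader) to its reported
    speed (m/s). Entries lost to communication failure are simply absent,
    so the fusion adapts to whatever information flow topology remains.
    Weighting scheme is an illustrative assumption, not the paper's.
    """
    if not states:
        raise ValueError("no predecessor information available")
    # Distance-decayed weights: nearer vehicles count more.
    weights = {k: math.exp(-decay * (k - 1)) for k in states}
    total = sum(weights.values())
    return sum(weights[k] * v for k, v in states.items()) / total

def smooth(prev_target, fused_target, alpha=0.3):
    """Exponential smoothing to damp jumps when the topology changes."""
    return (1 - alpha) * prev_target + alpha * fused_target

# Full topology vs. one degraded by a communication failure:
full = fuse_predecessor_states({1: 30.0, 2: 28.0, 3: 26.0})
degraded = fuse_predecessor_states({1: 30.0})  # only the leader heard
```

In a DRL setting, the smoothed fused value would enter the agent's observation (or post-process its action), so the control signal does not jump each time a link drops out of the topology.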
Availability:
- Find a library where the document is available. Order URL: http://worldcat.org/issn/10939687
Authors:
- Shi, Haotian
- Zhou, Yang
- Wang, Xin
- Fu, Sicheng
- Gong, Siyuan
- ORCID: 0000-0001-7640-6603
- Ran, Bin
- Publication Date: 2022-12
Language
- English
Media Info
- Media Type: Web
- Features: Figures; References; Tables
- Pagination: pp 2033-2051
Serial:
- Computer-Aided Civil and Infrastructure Engineering
- Volume: 37
- Issue Number: 15
- Publisher: Blackwell Publishing
- ISSN: 1093-9687
- Serial URL: http://onlinelibrary.wiley.com/journal/10.1111/(ISSN)1467-8667
Subject/Index Terms
- TRT Terms: Connected vehicles; Longitudinal control; Machine learning; Traffic flow; Vehicle mix; Vehicle to vehicle communications
- Subject Areas: Data and Information Technology; Highways; Operations and Traffic Management; Vehicles and Equipment
Filing Info
- Accession Number: 01892729
- Record Type: Publication
- Files: TRIS
- Created Date: Sep 11 2023 11:42AM