Scheduling the Operation of a Connected Vehicular Network Using Deep Reinforcement Learning

Driven by the rapid evolution of the Internet of Things, conventional vehicular ad hoc networks are progressing toward the Internet of Vehicles (IoV). With advances in computation and communication technologies, the IoV promises substantial commercial and research value, attracting many companies and researchers. To meet drivers' demands for safety and continuous connectivity in the IoV era, this paper addresses both safety and quality-of-service (QoS) concerns in a green, balanced, connected, and efficient vehicular network. Building on recent advances in training deep neural networks, the authors employ a deep reinforcement learning model, namely a deep Q-network, which learns a scheduling policy from high-dimensional inputs describing the current state of the underlying model. The learned policy extends the lifetime of the battery-powered vehicular network while promoting a safe environment that meets acceptable QoS levels. The presented deep reinforcement learning model is found to outperform several scheduling benchmarks in terms of completed request percentage (10–25% improvement), mean request delay (10–15%), and total network lifetime (5–65%).
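The abstract describes a deep Q-network scheduler but gives no implementation details. As an illustrative stand-in, the sketch below uses tabular Q-learning (the same Bellman update that a deep Q-network approximates with a neural function approximator) on a hypothetical toy model of a battery-powered node that either serves a pending request or sleeps. The environment dynamics, reward values, and state space are assumptions for illustration, not the paper's actual model.

```python
import random

# Hypothetical toy environment: a battery-powered vehicular node chooses to
# serve a pending request (spends energy, earns reward) or sleep (requests
# queue up, small delay penalty). This is NOT the paper's model.
ACTIONS = ["serve", "sleep"]


def step(battery, pending, action):
    """Return (next_battery, next_pending, reward) for the toy model."""
    if action == "sleep":
        # Requests accumulate up to a small queue cap; delay is penalized.
        return battery, min(pending + 1, 5), -0.1
    if pending > 0:
        # Serving completes one request at the cost of one battery unit.
        return battery - 1, pending - 1, 1.0
    # Serving with nothing pending wastes energy.
    return battery - 1, pending, -0.5


def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Epsilon-greedy tabular Q-learning over (battery, pending) states."""
    rng = random.Random(seed)
    Q = {}  # (battery, pending) -> [q_serve, q_sleep]
    for _ in range(episodes):
        battery, pending = 10, 3  # assumed initial state
        for _t in range(50):  # episode cap
            if battery == 0:
                break  # battery depleted: network node is dead
            s = (battery, pending)
            q = Q.setdefault(s, [0.0, 0.0])
            # Epsilon-greedy action selection.
            a = rng.randrange(2) if rng.random() < eps else q.index(max(q))
            battery, pending, r = step(battery, pending, ACTIONS[a])
            nq = Q.setdefault((battery, pending), [0.0, 0.0])
            # Standard Q-learning update; terminal states bootstrap nothing.
            target = r if battery == 0 else r + gamma * max(nq)
            q[a] += alpha * (target - q[a])
    return Q
```

After training, the greedy policy at the initial state prefers serving while requests are pending and battery remains, which mirrors the lifetime-vs-QoS trade-off the abstract describes; a DQN would replace the Q-table with a network mapping the high-dimensional state to action values.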

Language

  • English

Filing Info

  • Accession Number: 01707495
  • Record Type: Publication
  • Files: TLIB, TRIS
  • Created Date: Jun 4 2019 1:41PM