A Reinforcement Learning Technique for Optimizing Downlink Scheduling in an Energy-Limited Vehicular Network

In a vehicular network whose roadside units (RSUs) lack a permanent grid-power connection, vehicle-to-infrastructure (V2I) communication is disrupted once an RSU's battery is completely drained. These batteries are recharged periodically, either by human intervention or through energy harvesting techniques such as solar or wind energy. It is therefore crucial to conserve battery power until the next recharge cycle in order to maintain network operation and connectivity. This paper examines a vehicular network whose RSU has no permanent power source but is instead equipped with a large, periodically recharged battery. A reinforcement learning technique, the protocol for energy-efficient adaptive scheduling using reinforcement learning (PEARL), is proposed to optimize the RSU's downlink traffic scheduling during a discharge period. PEARL's objective is to equip the RSU with the artificial intelligence needed to learn, and subsequently exploit, an optimal scheduling policy that keeps the vehicular network operational throughout the discharge cycle while fulfilling the largest possible number of service requests. The simulation input parameters were chosen to guarantee the convergence of PEARL, whose exploitation outperformed three heuristic benchmark scheduling algorithms in terms of vehicle quality of experience and RSU throughput. For instance, the well-trained PEARL agent improved on the best heuristic algorithm by at least 50% in the percentage of vehicles departing with incomplete service requests.
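The abstract does not specify PEARL's state space, action space, or reward design, so the following is only a rough, hypothetical sketch of the kind of tabular Q-learning an energy-limited RSU scheduler might use. The toy state (remaining battery units plus the value of the currently pending request), the two actions (idle vs. transmit), and all parameter names are illustrative assumptions, not the paper's actual formulation:

```python
import random

def train_q_agent(episodes=3000, battery_max=3, horizon=6,
                  alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Toy tabular Q-learning for an energy-limited downlink scheduler.

    Hypothetical MDP (not the paper's): state = (battery, request value),
    action 0 = stay idle, action 1 = transmit (serves the pending request
    for a reward equal to its value, at a cost of one battery unit).
    An episode models one discharge period of `horizon` steps; it ends
    early if the battery is drained. With battery_max < horizon, the
    agent must learn to spend energy on high-value requests only.
    """
    rng = random.Random(seed)
    values = (1, 2)  # possible request "values" (e.g., urgency classes)
    q = {(b, v, a): 0.0
         for b in range(battery_max + 1) for v in values for a in (0, 1)}
    for _ in range(episodes):
        battery = battery_max
        v = rng.choice(values)                # value of the pending request
        for _ in range(horizon):
            if battery == 0:
                break                         # battery drained: episode over
            if rng.random() < epsilon:        # epsilon-greedy exploration
                a = rng.choice((0, 1))
            else:
                a = max((0, 1), key=lambda act: q[(battery, v, act)])
            reward = float(v) if a == 1 else 0.0
            nb = battery - a                  # transmitting costs one unit
            nv = rng.choice(values)           # next request arrives
            best_next = (max(q[(nb, nv, act)] for act in (0, 1))
                         if nb > 0 else 0.0)
            # standard Q-learning update
            q[(battery, v, a)] += alpha * (reward + gamma * best_next
                                           - q[(battery, v, a)])
            battery, v = nb, nv
    return q
```

After training, the greedy policy read off the Q-table transmits on high-value requests (e.g., it prefers action 1 in state `(battery=3, v=2)`), which mirrors PEARL's stated goal of rationing a fixed energy budget across a discharge cycle to maximize fulfilled service requests.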

Language

  • English

Filing Info

  • Accession Number: 01644992
  • Record Type: Publication
  • Files: TRIS
  • Created Date: Jun 22 2017 4:11PM