Mobility-Aware Edge Caching and Computing in Vehicle Networks: A Deep Reinforcement Learning Approach

This paper studies the joint communication, caching, and computing design problem for achieving operational excellence and cost efficiency in vehicular networks. The resource allocation policy is designed with the vehicle's mobility and a hard service deadline constraint taken into account. These critical challenges have often been neglected or addressed inadequately in existing work on vehicular networks because of their high complexity. To tackle them, the authors develop a deep reinforcement learning approach within a multi-timescale framework. Furthermore, the authors propose a mobility-aware reward estimation for the large-timescale model to mitigate the complexity caused by the large action space. Numerical results are presented to illustrate the theoretical findings developed in the paper and to quantify the performance gains attained.
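For intuition only, the sketch below shows how a mobility-aware reward might enter a reinforcement-learning caching loop. It is a heavily simplified, hypothetical illustration: a tabular Q-learning agent stands in for the paper's deep RL and multi-timescale design, and every name and parameter here (CONTENTS, stay_prob, the reward shape) is an assumption for illustration, not taken from the paper.

```python
# Illustrative sketch only: a tabular Q-learning agent deciding which content to
# cache at an edge node, with a reward discounted by the vehicle's dwell time.
# All constants and the reward shape are assumptions, not the paper's model.
import random

CONTENTS = 4                      # number of cacheable contents (assumed)
CACHE_ACTIONS = range(CONTENTS)   # action = which single content to cache
EPISODES = 2000
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

# State: index of the content the passing vehicle is predicted to request.
Q = [[0.0] * CONTENTS for _ in range(CONTENTS)]

def mobility_aware_reward(cached, requested, stay_prob):
    """Reward a cache hit, scaled by how likely the vehicle stays in coverage."""
    hit = 1.0 if cached == requested else -0.2   # a miss incurs a backhaul cost
    return hit * stay_prob

for _ in range(EPISODES):
    state = random.randrange(CONTENTS)        # predicted popular content
    stay_prob = random.uniform(0.3, 1.0)      # vehicle dwell probability (mobility)
    # epsilon-greedy action selection
    if random.random() < EPS:
        action = random.randrange(CONTENTS)
    else:
        action = max(CACHE_ACTIONS, key=lambda a: Q[state][a])
    requested = state if random.random() < 0.8 else random.randrange(CONTENTS)
    reward = mobility_aware_reward(action, requested, stay_prob)
    next_state = requested
    # one-step Q-learning update
    Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])

print("Learned caching choice per predicted request:",
      [max(CACHE_ACTIONS, key=lambda a: Q[s][a]) for s in range(CONTENTS)])
```

In the paper's setting, a deep network would replace the Q-table to cope with the large state and action spaces, and the mobility-aware reward estimation would operate at the larger of the two timescales; the sketch only conveys the shape of the decision loop.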

Language

  • English

Filing Info

  • Accession Number: 01687136
  • Record Type: Publication
  • Files: TRIS
  • Created Date: Nov 27 2018 4:38PM