Mobility-Aware Edge Caching and Computing in Vehicle Networks: A Deep Reinforcement Learning
This paper studies the joint communication, caching, and computing design problem for achieving operational excellence and cost efficiency in vehicular networks. The resource allocation policy is designed with the vehicle's mobility and the hard service deadline constraint taken into account. These critical challenges have often been neglected or addressed inadequately in existing work on vehicular networks because of their high complexity. The authors develop a deep reinforcement learning approach with a multi-timescale framework to tackle these challenges. Furthermore, they propose mobility-aware reward estimation for the large-timescale model to mitigate the complexity caused by the large action space. Numerical results are presented to illustrate the theoretical findings developed in the paper and to quantify the performance gains attained.
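To make the abstract's key idea concrete, the following is a minimal sketch, not the authors' implementation, of a large-timescale caching agent whose per-action reward is estimated by averaging over sampled vehicle dwell times ("mobility-aware reward estimation"). All names, sizes, distributions, and constants here are illustrative assumptions.

```python
import random

random.seed(0)

N_CONTENTS = 4                        # candidate contents to cache (hypothetical)
POPULARITY = [0.5, 0.3, 0.15, 0.05]   # assumed request probabilities
DEADLINE = 2.0                        # hard service deadline in seconds (assumed)

def mobility_aware_reward(action, n_samples=200):
    """Estimate the reward of caching content `action` by sampling
    vehicle dwell times instead of enumerating the full state space."""
    total = 0.0
    for _ in range(n_samples):
        dwell = random.expovariate(1.0)      # sampled dwell time (assumed exponential)
        # a cache hit only pays off if the vehicle stays long enough
        # to be served before the hard deadline
        served = dwell >= DEADLINE * 0.1
        total += POPULARITY[action] if served else 0.0
    return total / n_samples

# Large-timescale epsilon-greedy learning over the caching action space.
q = [0.0] * N_CONTENTS
for step in range(500):
    if random.random() < 0.1:
        a = random.randrange(N_CONTENTS)     # explore
    else:
        a = max(range(N_CONTENTS), key=lambda i: q[i])  # exploit
    r = mobility_aware_reward(a)
    q[a] += 0.1 * (r - q[a])                 # incremental Q-value update

best = max(range(N_CONTENTS), key=lambda i: q[i])
print("learned best content to cache:", best)
```

Because the reward is a Monte Carlo estimate over mobility samples, the agent never enumerates the joint caching/scheduling action space, which is the complexity reduction the abstract alludes to.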
Availability:
- Find a library where the document is available. Order URL: http://worldcat.org/issn/00189545
Supplemental Notes:
- Copyright © 2018, IEEE.
Authors:
- Tan, Le Thanh
- Hu, Rose Qingyang
Publication Date:
- 2018-11
Language
- English
Media Info
- Media Type: Web
- Features: References
- Pagination: pp. 10190-10203
Serial:
- IEEE Transactions on Vehicular Technology
- Volume: 67
- Issue Number: 11
- Publisher: Institute of Electrical and Electronics Engineers (IEEE)
- ISSN: 0018-9545
- Serial URL: http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=25
Subject/Index Terms
- TRT Terms: Computer models; Machine learning; Quality of service; Task analysis; Wireless communication systems
- Subject Areas: Data and Information Technology; Highways; Vehicles and Equipment
Filing Info
- Accession Number: 01687136
- Record Type: Publication
- Files: TRIS
- Created Date: Nov 27 2018 4:38PM