Deep Reinforcement Learning Empowered Resource Allocation in Vehicular Fog Computing

Recent advances in fog computing have significantly impacted the development of the Internet of Vehicles (IoV). Rapidly growing on-vehicle applications demand low-latency computing, placing enormous pressure on fog servers. Vehicular fog computing (VFC) can relieve this pressure by utilizing the idle resources of neighboring vehicles to complete on-vehicle application tasks, especially in the forthcoming 6G environment. However, vehicle mobility adds tremendous complexity to the allocation of on-vehicle resources. In a dynamic vehicular network, accurately learning vehicle users' real-time demand and allocating the idle resources of neighboring vehicles in a timely manner are therefore key to optimal resource allocation. This paper proposes a three-layer VFC cooperation architecture that enables cooperation between vehicles and can dynamically coordinate resource allocation for the IoV. The architecture predicts traffic flow with a deep learning (DL) method to intelligently estimate the numbers of available vehicle resources and tasks. A deep reinforcement learning (DRL) method is then used to dynamically and adaptively decide the matching time between vehicle resources and tasks, so as to maximize the success rate of vehicle resource allocation. Experiments show that the authors' method improves matching benefits by nearly 1.2 times compared with baseline methods.
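
The record does not include the paper's code, so the sketch below is only a minimal illustration of the DRL component described in the abstract, not the authors' implementation. It assumes a small deep Q-network (a common DRL choice; the paper's actual algorithm is not specified here) whose state summarizes predicted traffic flow, idle neighbor-vehicle resources, pending tasks, and elapsed wait time, and whose two actions are "wait" or "match resources to tasks now". All state features, the reward shaping, and the toy environment dynamics are hypothetical.

```python
import random
from collections import deque

import torch
import torch.nn as nn

# Hypothetical state features: [predicted traffic flow, idle neighbor-vehicle
# resources, pending tasks, elapsed wait time]; names are illustrative only.
STATE_DIM = 4
ACTIONS = 2          # 0 = wait for more resources/tasks, 1 = match now
GAMMA = 0.95         # discount factor for future matching rewards

class QNet(nn.Module):
    """Small MLP mapping a vehicular-network state to Q-values per action."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACTIONS),
        )

    def forward(self, x):
        return self.layers(x)

def toy_step(state, action):
    """Hypothetical dynamics: waiting can pool more idle resources, but
    waiting too long risks tasks expiring as neighbor vehicles move away."""
    flow, idle, tasks, waited = state
    if action == 1:  # match now: reward ~ tasks served, minus delay cost
        reward = min(idle, tasks) - 0.1 * waited
        next_state = [random.random(), random.random(), random.random(), 0.0]
        done = True
    else:            # wait: resources accumulate, some tasks drop, time cost
        reward = -0.05
        next_state = [flow, min(1.0, idle + 0.1),
                      max(0.0, tasks - 0.05), waited + 1.0]
        done = waited >= 5.0
    return next_state, reward, done

qnet = QNet()
optimizer = torch.optim.Adam(qnet.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)

for episode in range(200):
    state = [random.random(), random.random(), random.random(), 0.0]
    done = False
    eps = max(0.05, 1.0 - episode / 150)          # decaying exploration rate
    while not done:
        if random.random() < eps:                 # epsilon-greedy action
            action = random.randrange(ACTIONS)
        else:
            with torch.no_grad():
                q = qnet(torch.tensor(state, dtype=torch.float32))
                action = int(q.argmax())
        next_state, reward, done = toy_step(state, action)
        replay.append((state, action, reward, next_state, done))
        state = next_state

        if len(replay) >= 64:                     # one Q-learning update
            batch = random.sample(replay, 64)
            s, a, r, s2, d = zip(*batch)
            s = torch.tensor(s, dtype=torch.float32)
            a = torch.tensor(a).unsqueeze(1)
            r = torch.tensor(r, dtype=torch.float32)
            s2 = torch.tensor(s2, dtype=torch.float32)
            d = torch.tensor(d, dtype=torch.float32)
            q_sa = qnet(s).gather(1, a).squeeze(1)
            with torch.no_grad():
                target = r + GAMMA * (1 - d) * qnet(s2).max(1).values
            loss = nn.functional.mse_loss(q_sa, target)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```

The wait/match trade-off in the toy reward mirrors the abstract's core idea: deferring a match can gather more idle neighbor resources, but vehicle mobility makes delayed matches less likely to succeed.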

Language

  • English

Filing Info

  • Accession Number: 01926902
  • Record Type: Publication
  • Files: TRIS
  • Created Date: Aug 12 2024 9:13AM