Priority-Aware Task Offloading in Vehicular Fog Computing Based on Deep Reinforcement Learning

Vehicular fog computing (VFC) is expected to be a promising paradigm that can increase the computational capability of vehicles without relying on servers. Compared with accessing the remote cloud, VFC is suitable for delay-sensitive tasks because of its low-latency vehicle-to-vehicle (V2V) transmission. However, owing to the dynamic vehicular environment, a major challenge is how to motivate vehicles to share their idle computing resources while simultaneously evaluating their service availability in terms of vehicle mobility and computational capability in heterogeneous vehicular networks. Meanwhile, a vehicle's tasks with different priorities should be processed with different efficiencies. In this work, the authors propose a task offloading scheme in the context of VFC, in which vehicles are incentivized to share their idle computing resources through dynamic pricing that comprehensively considers vehicle mobility, task priority, and the service availability of vehicles. Given that the task offloading policy depends on the state of the dynamic vehicular environment, the authors formulate the task offloading problem as a Markov decision process (MDP) that aims to maximize the mean latency-aware utility of tasks over a period. To solve this problem, the authors develop a soft actor-critic (SAC) based deep reinforcement learning (DRL) algorithm that maximizes both the expected reward and the entropy of the policy. Finally, extensive simulation results validate the effectiveness and superiority of the proposed scheme, benchmarked against traditional algorithms.
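As a worked illustration of the training objective described above (a sketch in standard SAC notation, not an equation taken from the paper: the temperature coefficient α, the policy entropy H, and the state-action marginal ρ_π are generic SAC conventions assumed here), the maximum-entropy objective that a SAC agent optimizes can be written as

    J(\pi) = \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi} \Big[ r(s_t, a_t) + \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \Big],

where, under the MDP formulation summarized in the abstract, the per-step reward r(s_t, a_t) would correspond to the latency-aware utility of the task offloaded at decision step t, so that maximizing J(π) jointly encourages high mean task utility and a stochastic, exploratory offloading policy.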

Language

  • English

Filing Info

  • Accession Number: 01765808
  • Record Type: Publication
  • Files: TRIS
  • Created Date: Feb 2 2021 10:20AM