Hierarchical Routing for Vehicular Ad Hoc Networks via Reinforcement Learning

A vehicular ad hoc network (VANET) is a collection of vehicles and associated roadside infrastructure that provides mobile wireless communication services. Its highly dynamic topology still poses many routing and message-forwarding challenges. This paper addresses the delivery of messages from a vehicle to a fixed destination by hopping over neighboring vehicles. We propose QGrid, a reinforcement-learning-based hierarchical routing protocol that improves the message delivery ratio with minimal possible delay and hop count. The protocol works at two levels. First, it divides the geographical area into smaller grids and finds the next optimal grid toward the destination. Second, it discovers a vehicle inside, or moving toward, that grid to relay the message. No routing tables are needed: the protocol learns a Q-value table from the traffic flow in neighboring grids and uses it for grid selection. Vehicle selection can follow different strategies, such as greedy selection of the nearest neighbor or prediction of neighbor movement with a second-order Markov chain; this combination makes QGrid both an offline (learned Q-value table) and an online (per-hop vehicle choice) solution. QGrid is further improved by giving higher priority, during vehicle selection, to vehicles with fixed routes and better communication capabilities, such as buses. We carried out an extensive simulation evaluation on real-world vehicular traces to measure the performance of the proposed schemes. Comparisons among QGrid with and without bus aid and existing position-based routing protocols show that our proposed protocol greatly improves the delivery percentage.
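The two-level scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the grid size, reward values, learning rate, discount factor, and the random-walk training loop are all assumptions, and only the greedy nearest-neighbor vehicle strategy is shown (not the second-order Markov-chain variant or the bus-priority extension).

```python
import random

GRID_W, GRID_H = 4, 4      # divide the area into a 4x4 grid (assumption)
ALPHA, GAMMA = 0.5, 0.9    # learning rate / discount factor (assumption)
DEST = (3, 3)              # grid containing the fixed destination

def neighbors(g):
    """4-connected neighbor grids of grid g."""
    x, y = g
    cand = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(a, b) for a, b in cand if 0 <= a < GRID_W and 0 <= b < GRID_H]

# Level 1: learn a Q-value table offline; here training uses random walks
# over the grid as a stand-in for observed traffic flow.
Q = {}  # (grid, next_grid) -> learned value

def train(episodes=2000):
    for _ in range(episodes):
        g = (random.randrange(GRID_W), random.randrange(GRID_H))
        while g != DEST:
            nxt = random.choice(neighbors(g))
            reward = 1.0 if nxt == DEST else 0.0   # reward shaping is an assumption
            best_next = (0.0 if nxt == DEST else
                         max(Q.get((nxt, n), 0.0) for n in neighbors(nxt)))
            old = Q.get((g, nxt), 0.0)
            Q[(g, nxt)] = old + ALPHA * (reward + GAMMA * best_next - old)
            g = nxt

def next_grid(g):
    """Pick the neighbor grid with the highest learned Q-value."""
    return max(neighbors(g), key=lambda n: Q.get((g, n), 0.0))

# Level 2: among vehicles currently in the chosen grid, greedily pick the
# one geographically closest to the destination.
def pick_vehicle(vehicles, target_grid, dest_pos):
    in_grid = [v for v in vehicles if v["grid"] == target_grid]
    if not in_grid:
        return None
    return min(in_grid, key=lambda v: (v["pos"][0] - dest_pos[0]) ** 2 +
                                      (v["pos"][1] - dest_pos[1]) ** 2)
```

After training, each forwarding decision is two table lookups rather than a routing-table exchange, which is the property the abstract attributes to the learned Q-value table.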

Language

  • English

Filing Info

  • Accession Number: 01696543
  • Record Type: Publication
  • Files: TRIS
  • Created Date: Feb 21 2019 1:59PM