Online model-based reinforcement learning for decision-making in long distance routes

In road transportation, long-distance routes require scheduled driving times, breaks, and rest periods that comply with the regulations on truck drivers' working conditions, while ensuring goods are delivered within each customer's time window. However, routes are subject to uncertain travel and service times, and incidents may cause additional delays, making predefined schedules ineffective in many real-life situations. This paper presents a reinforcement learning (RL) algorithm capable of making en-route decisions regarding driving times, breaks, and rest periods under uncertain conditions. The authors' proposal aims to maximize the likelihood of on-time delivery while complying with drivers' work regulations. They use an online model-based RL strategy that needs no prior training and is more flexible than model-free RL approaches, in which the agent must be trained offline before making online decisions. Their proposal combines model predictive control with a rollout strategy and Monte Carlo tree search. At each decision stage, the algorithm anticipates the consequences of all possible decisions over a number of future stages (the lookahead horizon) and then uses a base policy to generate a sequence of decisions beyond that horizon. This base policy could be, for example, a set of decision rules based on the experience and expertise of the transportation company covering the routes. Their numerical results show that the policy obtained with their algorithm outperforms not only the base policy (by up to 83%) but also a policy obtained offline using deep Q-networks (DQN), a state-of-the-art model-free RL algorithm.
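To make the decision loop concrete, below is a minimal Python sketch of the rollout idea the abstract describes, under assumed interfaces. Everything in it (the toy state, the simulate, base_policy, reward, and legal functions, and all parameter values) is illustrative rather than taken from the paper, and the selective tree expansion of Monte Carlo tree search is omitted: within the lookahead horizon the sketch simply enumerates all feasible decisions and averages sampled outcomes of a stochastic model, then completes each trajectory with the base policy.

import random

ACTIONS = ["drive", "break", "rest"]

def simulate(state, action):
    # Toy stochastic model (hypothetical): state = (hours left in the
    # delivery window, km remaining, hours driven since the last rest).
    window, km, driven = state
    if action == "drive":
        speed = random.uniform(60, 90)              # uncertain travel time
        return (window - 1.0, max(0.0, km - speed), driven + 1.0)
    if action == "break":
        return (window - 0.75, km, max(0.0, driven - 2.0))
    return (window - 9.0, km, 0.0)                  # daily rest resets driving

def is_terminal(state):
    window, km, _ = state
    return km <= 0.0 or window <= 0.0

def reward(state):
    window, km, _ = state
    return 1.0 if km <= 0.0 and window >= 0.0 else 0.0   # on-time delivery

def legal(state):
    # Rough stand-in for driving-time regulations: no driving after
    # 4.5 h at the wheel without a break.
    _, _, driven = state
    return [a for a in ACTIONS if not (a == "drive" and driven >= 4.5)]

def base_policy(state):
    # Stand-in for a company's rule-based policy: drive whenever allowed.
    return "drive" if "drive" in legal(state) else "break"

def rollout_value(state, depth, lookahead, n_samples=20):
    if is_terminal(state):
        return reward(state)
    if depth >= lookahead:
        # Beyond the lookahead horizon: follow the base policy to the end.
        s = state
        while not is_terminal(s):
            s = simulate(s, base_policy(s))
        return reward(s)
    # Within the horizon: enumerate all feasible decisions, averaging each
    # one's value over sampled outcomes of the stochastic model.
    best = 0.0
    for a in legal(state):
        est = sum(rollout_value(simulate(state, a), depth + 1, lookahead)
                  for _ in range(n_samples)) / n_samples
        best = max(best, est)
    return best

def decide(state, lookahead=2, n_samples=20):
    # One MPC-style step: pick the best next action given current state.
    scores = {a: sum(rollout_value(simulate(state, a), 1, lookahead)
                     for _ in range(n_samples)) / n_samples
              for a in legal(state)}
    return max(scores, key=scores.get)

print(decide((12.0, 500.0, 0.0)))   # e.g. 'drive'

In the model-predictive-control spirit of the paper, decide would be called again after each realized transition, so the plan is continually re-optimized as travel and service times become known en route.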

Language

  • English

Filing Info

  • Accession Number: 01855708
  • Record Type: Publication
  • Files: TRIS
  • Created Date: Aug 24 2022 3:02PM