Deep Reinforcement Learning-Based Energy Management for a Series Hybrid Electric Vehicle Enabled by History Cumulative Trip Information

It is essential to develop energy management strategies (EMSs) with broad adaptability for hybrid electric vehicles (HEVs). This paper uses deep reinforcement learning (DRL) to develop EMSs for a series HEV, exploiting DRL's advantages of requiring no future driving information during derivation and of generalizing well when the energy management problem is formulated as a Markov decision process. History cumulative trip information is also integrated to provide effective state-of-charge guidance in the DRL-based EMSs. The proposed method is introduced systematically, from offline training to online application; its learning ability, optimality, and generalization are validated by comparison with a fuel-economy benchmark optimized by dynamic programming and with real-time EMSs based on model predictive control (MPC). Simulation results indicate that, without a priori knowledge of the future trip, the original DRL-based EMS achieves an average gap of 3.5% from the benchmark, outperforming the MPC-based EMS with accurate prediction; after output frequency adjustment is further applied, a mean gap of 8.7%, comparable with the MPC-based EMS under a mean prediction error of 1 m/s, is maintained alongside a noteworthy reduction in engine start times. Moreover, its computation speed of about 0.001 s per simulation step demonstrates its potential for practical application, and the method is independent of powertrain topology, so it is applicable to any type of HEV even when future driving information is unavailable.
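To make the MDP formulation described above concrete, the sketch below shows a toy, tabular Q-learning version of an energy-management loop for a series hybrid. It is illustrative only: the paper uses deep RL, and all states (SOC and power-demand bins), actions (discrete engine power levels), parameters, and the reward shape (fuel cost plus an engine-start penalty and a deviation penalty toward a fixed SOC reference standing in for cumulative-trip guidance) are assumptions, not the authors' design.

```python
import numpy as np

# Toy series-HEV energy-management MDP (hypothetical values throughout).
# State: (SOC bin, power-demand bin); action: discrete engine power level.
N_SOC, N_DEM, N_ACT = 10, 5, 4
ENGINE_POWERS = np.linspace(0.0, 30.0, N_ACT)    # kW, assumed levels
BATT_CAP_KWH = 5.0                               # assumed battery capacity
DT_H = 1.0 / 3600.0                              # 1-s simulation step, in hours

rng = np.random.default_rng(0)
Q = np.zeros((N_SOC, N_DEM, N_ACT))              # tabular Q-function

def step(soc, demand_kw, action):
    """One step: the engine supplies part of the demand, the battery covers
    the rest; the reward penalizes fuel use, engine-on events, and deviation
    from a 0.6 SOC reference (a crude stand-in for trip-based SOC guidance)."""
    p_eng = ENGINE_POWERS[action]
    p_batt = demand_kw - p_eng                   # + discharging, - charging
    soc_next = np.clip(soc - p_batt * DT_H / BATT_CAP_KWH, 0.0, 1.0)
    fuel_cost = 0.08 * p_eng + (0.5 if p_eng > 0 else 0.0)
    reward = -fuel_cost - 10.0 * (soc_next - 0.6) ** 2
    return soc_next, reward

def soc_bin(soc):
    return min(int(soc * N_SOC), N_SOC - 1)

def dem_bin(demand_kw):
    return min(int(demand_kw / 8.0), N_DEM - 1)

alpha, gamma, eps = 0.1, 0.95, 0.1               # assumed hyperparameters
soc = 0.6
for t in range(20000):
    demand = rng.uniform(0.0, 35.0)              # random power demand (kW)
    s = (soc_bin(soc), dem_bin(demand))
    a = int(rng.integers(N_ACT)) if rng.random() < eps else int(np.argmax(Q[s]))
    soc_next, r = step(soc, demand, a)
    s2 = (soc_bin(soc_next), dem_bin(demand))
    Q[s + (a,)] += alpha * (r + gamma * Q[s2].max() - Q[s + (a,)])
    soc = soc_next
```

In the paper's setting, the tabular Q-function would be replaced by a deep network and the random demand by recorded drive cycles; the structure of the loop (state, action, reward, bootstrapped update) is what carries over.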

Language

  • English

Filing Info

  • Accession Number: 01716536
  • Record Type: Publication
  • Files: TRIS
  • Created Date: Aug 26 2019 9:54AM