An Inverse Reinforcement Learning Approach to Car Following Behaviors

In this study the authors provide new insight into classic car-following theories by learning drivers' behavioral preferences. They model car-following behavior with decision-theoretic techniques, assuming the driver is a decision maker who acts according to a utility function that assigns a degree of desirability to each driving situation. The method solves the inverse problem of control theory, known in more modern machine-learning terminology as inverse reinforcement learning. The authors use a publicly available car-following dataset, the Bosch dataset, which includes headway distance, speed, and acceleration data. The simulation results recover a reward function under which the actual driving behavior observed in the data is preferable to any other behavior. Understanding such behaviors and preferences is becoming crucial as we enter the modern era of transportation automation: accounting for drivers' preferences when designing automation features would improve the safety and efficiency of the driving environment while ensuring a desirable and comfortable setting for those inside the vehicle.
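The core idea described in the abstract, finding a reward function under which the demonstrated driving is preferable to alternatives, can be sketched in miniature. The snippet below is not the authors' algorithm or the Bosch data; it is a generic illustration of linear-reward inverse reinforcement learning, with made-up car-following features (headway, speed, acceleration magnitude) and a simple structured-perceptron-style weight update.

```python
# Minimal IRL sketch with a linear reward R(s) = w . phi(s), where phi(s) are
# hand-picked car-following features. All trajectories below are hypothetical
# toy data, not the Bosch dataset used in the paper.

def feature_expectation(trajectory):
    """Average feature vector over the states of one trajectory."""
    n = len(trajectory)
    return [sum(state[i] for state in trajectory) / n
            for i in range(len(trajectory[0]))]

# Each state: (normalized headway, normalized speed, |acceleration|).
expert = [(0.8, 0.6, 0.05), (0.8, 0.6, 0.02), (0.9, 0.6, 0.03)]  # smooth, safe gap
alternatives = [
    [(0.2, 0.9, 0.40), (0.1, 0.9, 0.50), (0.2, 0.9, 0.45)],      # tailgating
    [(0.9, 0.1, 0.30), (0.9, 0.2, 0.35), (0.9, 0.1, 0.30)],      # over-braking
]

mu_expert = feature_expectation(expert)
w = [0.0, 0.0, 0.0]

def score(mu):
    return sum(wi * mi for wi, mi in zip(w, mu))

# Whenever an alternative scores at least as well as the demonstration under
# the current weights, push the weights toward the expert's features.
for _ in range(100):
    for alt in alternatives:
        mu_alt = feature_expectation(alt)
        if score(mu_alt) >= score(mu_expert):
            w = [wi + (me - ma) for wi, me, ma in zip(w, mu_expert, mu_alt)]

# The learned reward now rates the demonstrated behavior above every alternative.
assert all(score(mu_expert) > score(feature_expectation(a)) for a in alternatives)
```

Under these toy features the recovered weights reward large headway and penalize harsh acceleration, which is the qualitative kind of preference the paper aims to extract from real car-following data.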

  • Supplemental Notes:
    • This paper was sponsored by TRB committee AHB45 Standing Committee on Traffic Flow Theory and Characteristics.
  • Corporate Authors:

    Transportation Research Board

    500 Fifth Street, NW
    Washington, DC 20001, United States
  • Authors:
    • Hayeri, Yeganeh Mashayekh
    • Kim, Kee-Eung
    • Lee, Daniel
  • Conference: Transportation Research Board 95th Annual Meeting
  • Date: 2016

Language

  • English

Media Info

  • Media Type: Digital/other
  • Features: Figures; References
  • Pagination: 13p
  • Monograph Title: TRB 95th Annual Meeting Compendium of Papers

Filing Info

  • Accession Number: 01594103
  • Record Type: Publication
  • Report/Paper Numbers: 16-5602
  • Files: TRIS, TRB, ATRI
  • Created Date: Mar 21 2016 4:40PM