Multi-modal vehicle trajectory prediction based on mutual information

Autonomous vehicles need the ability to predict the motion of surrounding vehicles, which helps avoid potential accidents and supports decisions that ensure safety and comfort. The interactions among vehicles and the uncertainty of driving intention make trajectory prediction a challenging task. This study presents a long short-term memory (LSTM) model for trajectory prediction that accounts for both the mutual information among vehicles and the multi-modal nature of driving intention. The model consists of a data fusion encoder and a multi-modal decoder: the encoder summarises the mutual information via multiple LSTMs with shared parameters, and the decoder generates trajectories conditioned on driving intention. In addition, a mixture density network is added to produce probabilistic predictions, which improves the reliability of the results. The NGSIM data set is used for training and testing. The results show that the proposed model better captures interactive driving behaviour and outperforms state-of-the-art methods in the root-weighted square error of displacement and velocity.
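The abstract does not give the model's loss function, but mixture density networks for 2-D trajectory prediction conventionally output bivariate Gaussian parameters per time step and are trained by minimising the negative log-likelihood of the observed position. The sketch below (an assumption based on that standard formulation, not the paper's published code) shows that likelihood term in plain Python; the function name and argument layout are illustrative.

```python
import math

def bivariate_gaussian_nll(x, y, mu_x, mu_y, sigma_x, sigma_y, rho):
    """Negative log-likelihood of an observed position (x, y) under the
    bivariate Gaussian that a mixture density network head typically
    predicts: means (mu_x, mu_y), standard deviations (sigma_x, sigma_y),
    and correlation rho in (-1, 1)."""
    zx = (x - mu_x) / sigma_x
    zy = (y - mu_y) / sigma_y
    # Mahalanobis-like quadratic form for a correlated 2-D Gaussian.
    z = zx * zx - 2.0 * rho * zx * zy + zy * zy
    one_minus_rho2 = 1.0 - rho * rho
    # log of the normalising constant 2*pi*sigma_x*sigma_y*sqrt(1 - rho^2).
    log_norm = math.log(2.0 * math.pi * sigma_x * sigma_y
                        * math.sqrt(one_minus_rho2))
    return z / (2.0 * one_minus_rho2) + log_norm

# At the predicted mean with unit variances and no correlation, the NLL
# reduces to log(2*pi).
print(bivariate_gaussian_nll(0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0))
```

In a multi-modal decoder, one such density is produced per driving-intention mode, and the per-mode NLLs are combined with the predicted intention probabilities to score the full mixture.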


Media Info

  • Language: English

Filing Info

  • Accession Number: 01736545
  • Record Type: Publication
  • Files: TRIS
  • Created Date: Apr 22 2020 12:24PM