Vehicle Tracking Using Surveillance With Multimodal Data Fusion

Vehicle location prediction, or vehicle tracking, is an important topic in connected vehicles. The task is difficult when only a single data modality is available, which can introduce bias and limit accuracy. With the growth of sensor networks in connected vehicles, multimodal data are becoming accessible, and the authors therefore propose a framework for vehicle tracking with multimodal data fusion. Specifically, they fuse the results of two modalities, images and velocities. Images, processed in a vehicle detection module, provide visual information about vehicle features, while velocity estimation narrows down the possible locations of the target vehicles, reducing the number of candidates to be compared and thus the time consumption and computational cost. The vehicle detection model is built on a color faster R-CNN that takes both the texture and the color of vehicles as input, while velocity estimation is performed with a Kalman filter, a classical tracking method. Finally, a multimodal data fusion method integrates these outcomes to accomplish the vehicle-tracking task. Experimental results suggest that the method is effective, tracking vehicles across a series of surveillance cameras in urban areas.
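
The record does not include code, but as a rough illustration of the velocity-estimation step described above, the following is a minimal sketch of a constant-velocity Kalman filter that predicts a vehicle's next position and is corrected by matched detections. The state layout (x, y, vx, vy), noise parameters, and class name are assumptions for illustration, not details from the paper.

```python
# Minimal constant-velocity Kalman filter sketch (illustrative assumptions only;
# state = [x, y, vx, vy], measurements are observed positions from detections).
import numpy as np

class ConstantVelocityKalman:
    def __init__(self, dt=1.0, process_var=1.0, meas_var=10.0):
        # State transition: position advances by velocity * dt.
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)
        # Only position is observed, not velocity.
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = process_var * np.eye(4)   # process noise (assumed)
        self.R = meas_var * np.eye(2)      # measurement noise (assumed)
        self.x = np.zeros(4)               # state estimate
        self.P = np.eye(4) * 1e3           # state covariance

    def predict(self):
        # The predicted position defines a search region in the next frame,
        # shrinking the set of detection candidates to compare against.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        # Correct the prediction with a matched detection's position z = (x, y).
        y = np.asarray(z, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x
```

In a tracking loop of this kind, `predict()` would be called once per frame to restrict the candidate detections, and `update()` would be called with the position of the detection selected by the fusion step.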

Language

  • English

Filing Info

  • Accession Number: 01677122
  • Record Type: Publication
  • Files: TLIB, TRIS
  • Created Date: Jul 31 2018 8:03AM