A methodology for obtaining spatiotemporal information of the vehicles on bridges based on computer vision

Information on the spatiotemporal characteristics of vehicles on a bridge is vital for assessing traffic density and the stress state of the bridge. To obtain such information, the authors propose a computer-vision-based method comprising detection by a Faster Region-based Convolutional Neural Network (Faster R-CNN), multiple object tracking, and image calibration. An image dataset of eight vehicle types is used to train the Faster R-CNN. Multiple object tracking and image calibration, combined with detection in each video frame, are used to acquire vehicle parameters. Tracking is based on estimating the distances between vehicles. Image calibration is based on moving vehicles of known length and serves as the three-dimensional (3D) template for calculating vehicle parameters. Once the vehicle parameters are obtained, their spatiotemporal information can be derived. The system runs at 16 frames per second and requires only two cameras as input. The method is applied to a dual-tower cable-stayed bridge; the vehicle identification accuracies are about 90% and 73% in the virtual detection region, and the speed error for most vehicles is less than 6%.
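The abstract describes tracking by inter-vehicle distance and calibration from a vehicle of known length. The sketch below is a minimal illustration of that idea, not the authors' implementation: detections are matched frame-to-frame by greedy nearest-neighbour distance, and a metres-per-pixel scale from a known vehicle length converts centroid displacement into speed. All names, the 12 m reference length, and the distance threshold are illustrative assumptions.

```python
import math

def track_nearest(prev, curr, max_dist=50.0):
    """Greedy nearest-neighbour matching of detections across frames.

    prev: dict {track_id: (x, y)} of centroids from the previous frame.
    curr: list of (x, y) centroids detected in the current frame.
    Each detection takes the closest unused track within max_dist pixels;
    unmatched detections start new tracks. (Illustrative, not the paper's
    tracking algorithm.)
    """
    matches = {}
    used = set()
    next_id = max(prev, default=-1) + 1
    for cx, cy in curr:
        best_id, best_d = None, max_dist
        for tid, (px, py) in prev.items():
            if tid in used:
                continue
            d = math.hypot(cx - px, cy - py)
            if d < best_d:
                best_id, best_d = tid, d
        if best_id is None:
            best_id = next_id
            next_id += 1
        used.add(best_id)
        matches[best_id] = (cx, cy)
    return matches

def speed_kmh(p0, p1, metres_per_pixel, fps):
    """Speed from centroid displacement between two consecutive frames."""
    d_px = math.hypot(p1[0] - p0[0], p1[1] - p0[1])
    return d_px * metres_per_pixel * fps * 3.6

# Assumed calibration: a tracked vehicle of known length (here 12 m)
# spans 240 px in the image, giving 0.05 m/px.
scale = 12.0 / 240.0
prev = {0: (100.0, 200.0)}
curr = [(110.0, 200.0), (400.0, 50.0)]
tracks = track_nearest(prev, curr)          # track 0 continues; new track appears
v = speed_kmh(prev[0], tracks[0], scale, fps=16)
```

At the system's reported 16 frames per second, a 10-pixel displacement per frame under this assumed scale corresponds to roughly 29 km/h; in practice the calibration would come from the moving-vehicle 3D template described in the abstract rather than a fixed pixel span.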

Language

  • English

Filing Info

  • Accession Number: 01782855
  • Record Type: Publication
  • Files: TRIS
  • Created Date: Aug 6 2021 3:14PM