Traffic4D: Single View Reconstruction of Repetitious Activity Using Longitudinal Self-Supervision [supporting dataset]

Reconstructing 4D vehicular activity (3D space and time) from cameras is useful for autonomous vehicles, commuters, and local authorities planning for smarter and safer cities. Traffic is inherently repetitious over long periods, yet current deep-learning-based 3D reconstruction methods have not exploited such repetition and have difficulty generalizing to newly installed intersection cameras. The authors present a novel approach that uses longitudinal (long-term) repetitious motion as self-supervision to reconstruct 3D vehicular activity from video captured by a single fixed camera. Starting from off-the-shelf 2D keypoint detections, the algorithm optimizes 3D vehicle shapes and poses and then clusters their trajectories in 3D space. The 2D keypoints and trajectory clusters accumulated over the long term are then used to improve the 2D and 3D keypoints via self-supervision, without any human annotation. The method improves reconstruction accuracy over the state of the art on scenes that differ significantly in appearance from the keypoint detector's training data, and it supports applications including velocity estimation, anomaly detection, and vehicle counting. The authors demonstrate results on traffic videos captured at multiple city intersections, collected using iPhones, YouTube, and other public datasets.
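
The abstract mentions clustering vehicle trajectories in 3D space and applications such as velocity estimation. The following Python snippet is a rough illustrative sketch of those two ideas only, not the authors' code: it clusters toy 3D trajectories by resampling each to a fixed length and running DBSCAN on the flattened coordinates, and estimates speed by finite differences. The resampling scheme, distance threshold, frame rate, and toy trajectories are all assumptions made for illustration.

    # Illustrative sketch (not the Traffic4D implementation): trajectory
    # clustering and speed estimation on toy 3D tracks. All parameters
    # (resample length, eps, fps) are hypothetical.
    import numpy as np
    from sklearn.cluster import DBSCAN

    def resample(traj, n=32):
        """Resample a (T, 3) trajectory to n evenly spaced points."""
        t = np.linspace(0.0, 1.0, len(traj))
        ti = np.linspace(0.0, 1.0, n)
        return np.stack([np.interp(ti, t, traj[:, d]) for d in range(3)], axis=1)

    def cluster_trajectories(trajs, eps=8.0, min_samples=2):
        """Group (T, 3) trajectories (meters) by shape: resample, flatten,
        and let DBSCAN compare them with plain Euclidean distance."""
        X = np.stack([resample(t).ravel() for t in trajs])
        return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)

    def estimate_speed(traj, fps=30.0):
        """Mean speed (m/s) from finite differences of 3D positions."""
        steps = np.linalg.norm(np.diff(traj, axis=0), axis=1)
        return steps.mean() * fps

    # Toy usage: two parallel straight tracks and one turning track.
    line = lambda s, e, T=60: np.linspace(s, e, T)
    trajs = [
        line([0, 0, 0], [30, 0, 0]),
        line([0, 1, 0], [30, 1, 0]),
        np.stack([line([0, 0, 0], [15, 15, 0], 30),
                  line([15, 15, 0], [15, 30, 0], 30)]).reshape(-1, 3),
    ]
    print(cluster_trajectories(trajs))  # e.g. [0 0 -1]: straight tracks group, turn is an outlier
    print([round(estimate_speed(t), 1) for t in trajs])  # mean speeds in m/s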

Language

  • English

Media Info

  • Media Type: Dataset
  • Dataset publisher:

    Mobility21

    Carnegie Mellon University
    Pittsburgh, PA, United States

Filing Info

  • Accession Number: 01779781
  • Record Type: Publication
  • Contract Numbers: 69A3551747111
  • Files: UTC, TRIS, ATRI, USDOT
  • Created Date: Aug 25 2021 11:41AM