Automatic Critical Event Extraction and Semantic Interpretation by Looking-Inside

Data-driven approaches are becoming prevalent in driver assistance systems, and with large-scale data collection efforts such as the 100-Car Study and the second Strategic Highway Research Program (SHRP2), there is a need for automatic extraction of critical driving events and semantic characterization of driving. This is especially necessary for videos looking inside at the driver, since manual extraction and annotation are time-consuming and subjective. The labeling process is often overlooked and undervalued, even though data mining is the first critical step in the design and development of machine-vision-based predictive safety systems. In this paper, the authors define and implement quantitative measures of the vocabularies often used by data reductionists when labeling looking-in videos of drivers. The approach is demonstrated on a relatively large dataset containing almost 200 minutes of driving (600,000 frames in total from two looking-in videos) of multiple drivers, collected by UCSD-LISA. The authors qualitatively show the advantages of automatically extracting such information on data at this scale.


Language

  • English

Media Info

  • Media Type: Web
  • Features: References
  • Pagination: pp 2274-2279
  • Monograph Title: 18th International IEEE Conference on Intelligent Transportation Systems (ITSC 2015)

Filing Info

  • Accession Number: 01599792
  • Record Type: Publication
  • ISBN: 9781467365956
  • Files: TRIS
  • Created Date: May 2 2016 3:22PM