A Multi-Modal Driver Fatigue and Distraction Assessment System

In this paper, the authors present a multi-modal approach to driver fatigue and distraction detection. Using a driving simulator platform equipped with several sensors, they designed a framework to acquire sensor data, process the signals, and extract features related to fatigue and distraction. The features from the different sources are then fused to infer the driver's state of inattention. The system captures audio, color video, depth maps, heart rate, and steering wheel and pedal positions. The signals are processed by three modules: a vision module, an audio module, and an other-signals module. The modules are independent of each other and can be enabled or disabled at any time. Each module extracts relevant features and, using hidden Markov models, produces its own estimate of driver fatigue and distraction. Finally, the module outputs are fused with contextual information through a Bayesian network; a dedicated network was designed for each of fatigue and distraction. The complementary information extracted from the modules allows a reliable estimation of driver inattention. Experimental results show that the system detects fatigue with 98.4% accuracy and distraction with 90.5% accuracy.
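
As a rough illustration of the two-stage architecture the abstract describes (per-module hidden Markov model inference followed by Bayesian fusion), the sketch below shows a minimal Python version of the idea. It is not the authors' implementation: the state names, observation alphabets, and all probability values are invented for illustration, and the fusion step uses a naive conditional-independence product as a stand-in for the paper's full Bayesian network with contextual variables.

    # Sketch: each module runs a small 2-state HMM over its quantized
    # feature stream and reports a posterior over the driver's state;
    # the posteriors are then fused under a naive independence assumption.
    # All parameters below are illustrative assumptions, not from the paper.
    import numpy as np

    STATES = ("alert", "fatigued")  # hypothetical hidden states

    def forward_posterior(obs, start, trans, emit):
        """Forward algorithm: P(state at final step | observation sequence)."""
        alpha = start * emit[:, obs[0]]
        alpha /= alpha.sum()
        for o in obs[1:]:
            alpha = (alpha @ trans) * emit[:, o]  # predict, then update
            alpha /= alpha.sum()                  # normalize to avoid underflow
        return alpha  # posterior over STATES at the last time step

    def fuse(posteriors, prior):
        """Naive Bayesian fusion of per-module posteriors, assuming the
        modules are conditionally independent given the driver state."""
        # Divide out the shared prior to recover per-module evidence,
        # multiply the evidence together, then reapply the prior once.
        evidence = np.prod([p / prior for p in posteriors], axis=0)
        fused = prior * evidence
        return fused / fused.sum()

    if __name__ == "__main__":
        # Hypothetical HMM parameters; observation symbols are already
        # quantized features (0 = normal-looking, 1 = fatigue-like).
        start = np.array([0.9, 0.1])
        trans = np.array([[0.95, 0.05],
                          [0.10, 0.90]])
        emit_vision = np.array([[0.8, 0.2],   # e.g. eye-closure symbols
                                [0.3, 0.7]])
        emit_audio = np.array([[0.9, 0.1],    # e.g. yawning-sound symbols
                               [0.4, 0.6]])

        vision_obs = [0, 1, 1, 1, 0, 1]
        audio_obs = [0, 0, 1, 1, 1, 1]

        p_vision = forward_posterior(vision_obs, start, trans, emit_vision)
        p_audio = forward_posterior(audio_obs, start, trans, emit_audio)
        fused = fuse([p_vision, p_audio], prior=start)
        print(dict(zip(STATES, fused.round(3))))

This also reflects the modularity the abstract emphasizes: because each module produces a self-contained posterior, a disabled module can simply be omitted from the list passed to the fusion step.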

Language

  • English

Filing Info

  • Accession Number: 01608276
  • Record Type: Publication
  • Files: TRIS
  • Created Date: Aug 3 2016 9:57AM