Where Does the Driver Look? Top-Down-Based Saliency Detection in a Traffic Driving Environment

A traffic driving environment is a complex and dynamically changing scene. When driving, drivers continually allocate their attention to the most important and salient areas or targets. Traffic saliency detection, which computes the salient and high-priority areas or targets in a specific driving environment, is an indispensable part of intelligent transportation systems and can support autonomous driving, traffic sign detection, driver training, collision warning, and other tasks. Recent advances in visual attention models have yielded substantial progress in describing eye movements over simple stimuli and tasks such as free viewing or visual search. However, to date, no computational framework can accurately mimic a driver's gaze behavior and saliency detection in a complex traffic driving environment. In this paper, the authors analyzed the eye-tracking data of 40 subjects, consisting of nondrivers and experienced drivers, while they viewed 100 traffic images. The authors found that a driver's attention was mostly concentrated on the end of the road in front of the vehicle. They proposed that the vanishing point of the road can serve as valuable top-down guidance in a traffic saliency detection model. They then built a framework that combines classic bottom-up saliency with this top-down cue. The results show that the proposed vanishing-point-based top-down model can effectively simulate a driver's attention areas in a driving environment.
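The abstract describes the general recipe (a bottom-up saliency map modulated by a vanishing-point-based top-down prior) without implementation detail. The sketch below is one plausible way to realize that combination, not the authors' model: the spectral-residual bottom-up stage, the Gaussian prior width `sigma_frac`, the blending weight `alpha`, and the assumption that the vanishing point `vp_xy` is already known are all illustrative choices.

```python
import numpy as np

def bottom_up_saliency(gray):
    """Spectral-residual saliency (Hou & Zhang, 2007) as a simple bottom-up stage."""
    f = np.fft.fft2(gray.astype(np.float64))
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    # Spectral residual: log amplitude minus its 3x3 local average.
    k = 3
    pad = np.pad(log_amp, k // 2, mode="edge")
    avg = sum(pad[i:i + log_amp.shape[0], j:j + log_amp.shape[1]]
              for i in range(k) for j in range(k)) / (k * k)
    residual = log_amp - avg
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal / (sal.max() + 1e-8)

def vanishing_point_prior(shape, vp_xy, sigma_frac=0.15):
    """Gaussian weight map centred on the road vanishing point (top-down cue).

    vp_xy is assumed to come from a separate vanishing-point detector.
    """
    h, w = shape
    y, x = np.mgrid[0:h, 0:w]
    vx, vy = vp_xy
    sigma = sigma_frac * max(h, w)
    return np.exp(-((x - vx) ** 2 + (y - vy) ** 2) / (2 * sigma ** 2))

def combined_saliency(gray, vp_xy, alpha=0.5):
    """Blend the bottom-up map with a copy of itself modulated by the prior.

    The blending rule is an assumption for illustration, not the paper's formula.
    """
    bu = bottom_up_saliency(gray)
    td = vanishing_point_prior(gray.shape, vp_xy)
    return alpha * bu + (1 - alpha) * bu * td

# Usage example on a synthetic grayscale frame, with the vanishing point
# placed near the end of the road ahead (roughly image centre, upper half).
frame = np.random.rand(240, 320)
sal_map = combined_saliency(frame, vp_xy=(160, 100))
```

Under these assumptions, regions near the vanishing point retain their bottom-up saliency while peripheral responses are attenuated, which is one way to mimic the reported concentration of driver gaze at the end of the road ahead.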

Language

  • English

Filing Info

  • Accession Number: 01612572
  • Record Type: Publication
  • Files: TLIB, TRIS
  • Created Date: Jun 28 2016 9:44AM