HammerDrive: A Task-Aware Driving Visual Attention Model

The authors introduce HammerDrive, a novel architecture for task-aware visual attention prediction in driving. The proposed architecture is learnable from data and can reliably infer the driver's current focus of attention in real time, while requiring only limited, easy-to-access telemetry data from the vehicle. The architecture is built on two core concepts: 1) driving can be modeled as a collection of sub-tasks (maneuvers), and 2) each sub-task affects how a driver allocates visual attention resources, i.e., their eye-gaze fixations. HammerDrive comprises two networks: a hierarchical monitoring network of forward-inverse model pairs for sub-task recognition and an ensemble network of task-dependent convolutional neural network modules for visual attention modeling. The authors assess HammerDrive's ability to infer driver visual attention on data collected from 20 experienced drivers in a virtual reality-based driving simulator experiment. They evaluate the accuracy of the monitoring network for sub-task recognition and show that it is an effective, lightweight network for reliable real-time tracking of driving maneuvers, achieving over 90% accuracy. The results show that HammerDrive outperforms a comparable state-of-the-art deep learning model for visual attention prediction on numerous metrics, with ~13% improvement in both Kullback-Leibler divergence and similarity, and demonstrate that task-awareness is beneficial for driver visual attention prediction.
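To make the two-network design concrete: each forward-inverse pair in the monitoring network tries to explain the incoming telemetry, the pair with the lowest prediction error indicates the active maneuver, and that recognition signal weights the per-task saliency modules. Below is a minimal PyTorch sketch of such a task-gated ensemble; the sub-task set, layer sizes, and softmax-over-errors gating are illustrative assumptions, not the paper's actual implementation.

    # Minimal sketch of a task-gated attention ensemble (assumed design,
    # not the authors' exact architecture or hyperparameters).
    import torch
    import torch.nn as nn

    SUBTASKS = ["lane_keep", "lane_change", "turn"]  # assumed sub-task set

    class ForwardInversePair(nn.Module):
        """One monitoring unit: the inverse model proposes a control signal
        for its sub-task; the forward model predicts the next telemetry state."""
        def __init__(self, state_dim: int, action_dim: int):
            super().__init__()
            self.inverse = nn.Sequential(
                nn.Linear(2 * state_dim, 32), nn.ReLU(),
                nn.Linear(32, action_dim))
            self.forward_model = nn.Sequential(
                nn.Linear(state_dim + action_dim, 32), nn.ReLU(),
                nn.Linear(32, state_dim))

        def prediction_error(self, state, next_state):
            action = self.inverse(torch.cat([state, next_state], dim=-1))
            pred = self.forward_model(torch.cat([state, action], dim=-1))
            return ((pred - next_state) ** 2).mean(dim=-1)  # per-sample MSE

    class TaskGatedAttention(nn.Module):
        """Ensemble of per-sub-task saliency CNNs, weighted by how well each
        forward-inverse pair explains the current telemetry."""
        def __init__(self, state_dim=8, action_dim=2):
            super().__init__()
            self.pairs = nn.ModuleList(
                ForwardInversePair(state_dim, action_dim) for _ in SUBTASKS)
            self.saliency = nn.ModuleList(
                nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                              nn.Conv2d(8, 1, 3, padding=1))
                for _ in SUBTASKS)

        def forward(self, frame, state, next_state):
            errors = torch.stack([p.prediction_error(state, next_state)
                                  for p in self.pairs], dim=-1)
            weights = torch.softmax(-errors, dim=-1)  # low error -> high weight
            maps = torch.stack([m(frame) for m in self.saliency], dim=-1)
            return (maps * weights.view(-1, 1, 1, 1, len(SUBTASKS))).sum(dim=-1)

    model = TaskGatedAttention()
    frame = torch.rand(1, 3, 64, 64)                  # dashcam frame (toy size)
    state, nxt = torch.rand(1, 8), torch.rand(1, 8)   # telemetry snapshots
    print(model(frame, state, nxt).shape)             # torch.Size([1, 1, 64, 64])

In this reading, low forward-model error maps to high gating weight, i.e., a soft selection of the active maneuver; how the paper actually converts recognition confidence into ensemble weights may differ from this softmax assumption.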

Language

  • English

Filing Info

  • Accession Number: 01852104
  • Record Type: Publication
  • Files: TRIS
  • Created Date: Jul 21 2022 11:30AM